That reminds me of the stake Ubuntu Core was born with in its heart:
> An Ubuntu SSO account is required to create the first user on an Ubuntu Core installation.
They just keep flinging shit at nothing and hoping to hit a wall they can build a gate in.
They're trying to boil the frog slowly with Snap on the Server/Desktop branches.
There is no possible genuine motive for these maneuverings other than to end up in the position of gatekeeper.
If anyone at Canonical is listening, you should be aware it doesn't matter how slowly and carefully you approach this, or how you justify it, the first time I'm forced to kiss the ring to get my software to work, your software is gone from any system I own or manage, immediately and forever.
You're not going to get within a thousand miles of monetizing the ecosystem by gatekeeping it -- the moment you even so much as assert the position of gatekeeper you're trying to create for yourselves you're dead to me.
To be completely honest, if Canonical ever wants the theoretical "year of the Linux desktop" to happen, they need a way to get proper cloud accounts working.
A big part of the value of Apple's ecosystem is iCloud. Microsoft has done the same thing with the syncing and cross platform capabilities of their Microsoft Accounts.
Most Linux users today don't care much if they need to create separate email, cloud storage, backup storage, password recovery and application store accounts. That's because many Linux users have a preference for a particular service, or even self-host.
Normal people aren't like that. A desktop without a cloud account is like an Android phone without Google apps or a theoretical Apple phone without Apple apps. Technically fully functional and customisable via sideloading, but ultimately pointless because nobody wants to go through that. Just ask any regular Jane or Joe on the street if they can set up IMAP email on their phone; I bet they just use Gmail or Outlook.
It's not the Linux power users Canonical wants to attract; it's regular users. Their requirements are different from yours or mine. The Ubuntu Core website specifically says it's built for business. Its focus is on configurability and enterprise deployment, making it closer to a managed Windows 10 enterprise network than something you run on your laptop at home. Lockdowns and gatekeeping are practically what the service is intended to do.
For my money, the year of the Linux Desktop was 2008. It's just that nobody noticed.
Since 2008, Linux has had the best combination of performance and hardware support on older machines, and the advantage has only grown since then. Windows and MacOS keep falling further behind in usability on machines older than four years, and that's the only hardware I would purchase for personal use. Why throw your money away on planned obsolescence, on an OS that keeps getting worse and worse, when you could use Linux on an older machine and have things keep getting better and better?
I noticed! I've been using Linux for decades, and Ubuntu since... 12.04 I think? It's the Linux Desktop for me. I game, I work, I watch movies, play music, connect to wireless stuff, I do pretty much everything I want -- it's a joy. (I always like telling how my wireless HP printer is easier to set up on Ubuntu than on OSX. It simply works on Ubuntu, but the Mac has trouble discovering it...)
I've been slowly moving that way for years. I started dual-booting Windows and Ubuntu, Ubuntu for coding and Windows for everything else -- but over time I've been spending more and more time in Ubuntu, until I looked up one day and realized I used Windows for gaming and Ubuntu for everything else. I was going weeks without booting into Windows.
And then last week after getting my computer back from repairs I did a clean install of Pop!_OS. No more Windows at all -- I have a fully Linux desktop.
Granted I ended up leaving unallocated space for a Windows partition (shrinking a LUKS partition was not very fun, it took me two attempts) for when I eventually want to play some game, but who knows when that will be.
Anyway, yeah, point is, for a sufficiently tech-savvy user, the year of the Linux Desktop has arrived. (And I could be wrong, but I think that threshold is actually pretty low. I'd guess that as long as you're not afraid of technology, you can handle installing Linux. Pop!_OS has a great installer and app store, too.)
> MacOS keep falling further behind in usability on machines older than four years
The huge user base of folks running Thunderbolt 2-based MacBook Pros from 2015 or earlier seems to disagree, and a vocal minority of them here on HN say you can pry that model from their cold, dead hands. It's supported by the latest OSes with zero penalty, performance or otherwise.
In terms of usability, yes, but the problem with Linux in recent years has rarely been usability, it's the lack of workstation-level applications that are readily available on Windows and MacOS.
As long as Linux lacks native support for Autodesk, Adobe CC, Office 365, FL Studio, Pro Tools, etc. and dozens of other niche programs by ISVs it will never approach the mainstream with any sort of widespread appeal. Currently Linux on the desktop is suitable for people who are aware of these limitations and either circumvent (through technical means such as WINE, which are opaque to the typical end-user) or tolerate them, acknowledging the limitations and doing nothing to mitigate them while embracing Linux's strengths as a web and educational medium (think Chromebooks).
I legitimately have tried going Linux-only several times in my life and I've just come back to Windows every time. I know it's not ideal from a security perspective, a privacy perspective, etc. but there is just zero competition with Windows in terms of software support. Windows' backwards compatibility and range of software support is a masterclass in how to enable productivity. Even MacOS doesn't come close, with how frequently Apple breaks stuff that's expected to be supported (32-bit app support, Rosetta, etc) and how frequently they change architectures (68k > PowerPC > Intel > ARM, vs Microsoft with 32-bit Intel > 64/32-bit Intel)
First, it wasn't usable. Then, it didn't have drivers. Then, it crashed a lot. Now it doesn't have Autodesk stuff? For the record, my mom doesn't know what Autodesk is. A "desktop" for me is where you can play movies, edit documents, browse the net and check email.
And Linux is fantastic for that, until you come across a use case that is outside those typical bounds.
Say your mom decides she likes photography one day and decides to buy a camera and take some lessons. Say her teacher says "we use lightroom and photoshop so you'll need to have these because they're industry standard" and she has no idea what to do and why the Adobe CC launcher won't load on Linux. Then she'll call you up and ask why it won't work, you'll respond with "there's no photoshop, you can use GIMP though" and your mom being a regular computer user won't be familiar with the eccentricities of GIMP as editing software and will be lost and unable to follow through on her new hobby.
This has literally happened to people in my life I've recommended Ubuntu to. Too many times to count.
That is a real problem, but in my opinion unrelated to whether an OS is "ready for the desktop". That's just a business problem, not a technical problem: Adobe cares about platforms where the money is.
It's not a technical problem because people like my mom can learn to use GIMP (note: people who use photo editing software are "pros" of a kind, anyway; "regular" users don't know how to edit photos, either) or Krita or whatever. But even if you fix GIMP's UX issues, it's still not Photoshop or Lightroom, so the problem remains: there's no Adobe software for Linux.
But that's not what being ready for the desktop is about. That's not what a desktop is for most people, either. All in my opinion, of course.
PS: if Lightroom magically makes it to Linux (hypothetical thought experiment), but then it's Overwatch or some other high profile AAA game that doesn't run natively, is it still a problem of the Linux desktop?
>That is a real problem, but in my opinion unrelated to whether an OS is "ready for the desktop". That's just a business problem, not a technical problem: Adobe cares about platforms where the money is.
Only back in the day part of the idea around "Linux for the Desktop" was that everybody would use the "better" FOSS programs, and not wait for MS/Adobe/Autodesk/Avid/etc.
But, as you note, this hasn't happened, and "even if you fix GIMP's UX issues, it's still not Photoshop or Lightroom".
>But that's not what being ready for the desktop is about. That's not what a desktop is for most people, either.
Well, pragmatically the desktop is a Windows machine, which just works, doesn't require them to think long and hard about which cpu/memory/laptop/peripherals/etc to buy, has drivers for all of their devices, has all kinds of apps they might use (beyond email and web), and so on.
> Only back in the day part of the idea around "Linux for the Desktop" was that everybody would use the "better" FOSS programs, and not wait for MS/Adobe/Autodesk/Avid/etc.
That wasn't really how I and many others envisioned it, no. Free software has other benefits that make it "better", not merely technical ones.
I'd say gaming is basically there now and it only gets better each year. I often buy games without checking if they are compatible and they run flawlessly.
If she gets seriously into photography, the cost of a Windows machine is the least of the expenses she's heading into, but I understand that a main machine running Windows is good insurance. And yet I haven't had one, even as a backup, since 2009. I wouldn't be too worried.
Those are just the common things everyone does with a computer, but I think most people end up adding one special thing to that list. For some that is gaming, for others it's doing taxes, photo editing, programming, or some specialty software for a niche hobby that few people think about.
Even my mum ended up using specialized software for her knitting hobby.
I'd say you can do almost anything except the specialist software stuff. By specialist I mean "brand stuff"; for example, the problem with a photo editing course is that instead of teaching you the principles, they often teach you how to do stuff in (say) Lightroom, essentially compounding the problem. I understand why they do this -- it cuts corners and assumes a common ground; supporting a multitude of editing software the instructor might not be familiar with would be a nightmare -- but it doesn't help the situation.
If they taught you the principles of editing photos, without using brand specific names, you could do this with GIMP or Krita.
I do hobby stuff with my Linux laptop, by the way :)
Agreed. Though I think the main problem is that an instructor cannot be expected to provide guidance for a dozen different tools. I understand the problem, I just wish they could approach this from an angle of general principles instead of specific tools.
This is something I don't think enough people consider. There's a very long tail of niche software for niche use cases, and even "casual" computer users are likely to end up needing something from that very large set.
The problem I see is that, increasingly, people who want to do the non-professional things lean towards tablets instead of desktops. Most people use their “desktops” for work and hobby things, where what software can run on them is incredibly important.
Totally! I have to say in this regard, a Linux desktop is very satisfying. I game, watch movies, code, do videoconferencing, print stuff, edit photos, all without much tweaking -- if at all.
Reality is a moving goalpost itself. The desktops (Windows, macOS) aren't in some standstill waiting for Linux desktops to catch up.
But what you described isn't a "moving goalposts" case anyway. It's problems that need to be solved one after another and which all were initially present as issues.
Since at first it wasn't usable, it wouldn't have mattered if it had Autodesk apps. Usability was more important.
Once it got usable, it still didn't matter whether it had Autodesk apps when it didn't have drivers.
Then, when it got the drivers, that only meant you could use your printer, wifi, bluetooth, etc. But the "crashing a lot" was a greater problem still, greater than the lack of commercial apps.
>A "desktop" for me is where you can play movies, edit documents, browse the net and check email.
And for billions of people it's those things, plus work stuff they need to be able to run, multimedia creation they need once in a while, game stuff they like to play and so on...
Now that it does have most things in order, the lack of commercial apps is another problem it should work on (when it comes to people's needs).
> the lack of commercial apps is another problem it should work on (when it comes to people's needs)
I think that's a losing proposition. People like me who use Linux have mostly everything they need -- and there really is a lot for the Linux desktop these days. The people who need "commercial apps" are on Windows and MacOS already, and for them, the Linux experience will always be subpar (I'm assuming for them the vast quantities of FOSS are either not enough, or not easy to use). Moving to a "commercial desktop" like Canonical seems to be trying to do will result in a failure and alienate the current userbase who loves Linux for what it is. Attempts at gatekeeping, doubly so... just look at the reactions against every move Canonical does in this direction.
> Linux lacks native support for Autodesk, Adobe CC, Office 365, FL Studio, Pro Tools
I understand your point but I've been using Ubuntu since 2009 and never needed any of those programs. I do another job (web development, mostly backend.)
Actually I had to run the real Excel (not LibreOffice) for a project many years ago. I created a VM with VirtualBox, got a license for Excel from my customer and installed it there.
> As long as Linux lacks native support for Autodesk, Adobe CC, Office 365, FL Studio, Pro Tools, etc. and dozens of other niche programs by ISVs it will never approach the mainstream with any sort of widespread appeal.
Meh, I think it's not an all or nothing situation. My client uses exclusively Windows on the desktop but basically the most "advanced" applications they use are Excel and Outlook. Practically all their line of business software is web-based. Most run on Windows for some reason, but as far as the user is concerned, it's just a web app.
I bet that the situation is similar for most companies, and the other tools you mentioned aren't all that widespread. Of course, for the companies that use them to make money they're important. But I would say that's "specialized use" territory. I bet most of the people working for my client (> 3000 employees) have no idea what Pro Tools is, and they are more or less in the media sector.
I personally use the Adobe CC photography suite as a hobby. And although for the moment only Lightroom is affected, I get the feeling that Adobe is looking to move the products to the cloud. Microsoft has already done this with Office 365.
For my (admittedly limited) use case, installed Office is dead. It's a pain to install, takes forever to update, and most importantly, works way, way worse than the online version. Outlook online doesn't lag, and the messages scroll while I move the slider instead of waiting for me to let it go, just like it used to before they switched to the new interface, around 2013 or 2016, with all the white space. The online version even works on Firefox on Linux. Teams works on Linux too. Remoting into Windows servers is way more practical on Linux (Remmina) than on Windows. So I would say that for my client, the reason they still use Windows is basically inertia. I'm using Linux full time and have never had an issue interfacing with the other people working there.
I am not quite sure it was that early, but I agree it is in the past. Specifically, I contend that the year of the Linux desktop was some time before 2015, when Microsoft made (a version of) Windows (10) available gratis.
The only cloud accounts users care about are logins for all their web based services which are synced by Firefox or Chrome.
They already have, for example, an email account; any web-based services that come with Ubuntu are just something the people who actually use Ubuntu have to click through.
Normal people, who by and large do not actually use Ubuntu, would probably be more likely to sign up for such options if they were the default, but no more likely to actually use them.
The comparison to sideloading is particularly badly made. On mobile platforms you need a platform-specific account to install even free apps, because it is ultimately also the way you consume paid-for content.
Linux desktops all come with a platform specific app store that requires no particular integration because it is used to consume free content.
Paid content is consumed in the web browser.
Games are largely consumed via steam.
Paid apps are a harder sell because of the Linux desktop's 3% market share, and major apps that do offer support don't want to give 30% to any platform.
Only people trying to make hundreds of thousands to millions of small sales via access to the massive iOS or Android markets are willing to give up 30%.
Normal people don't know or care which cloud accounts they're logged into. I don't think I've ever heard of anyone talk about a (personal) Microsoft Account as a positive. People care about syncing their photos between their computer and their phone, sure, but neither Microsoft Cloud nor Canonical's whateveritis does that. People don't know how to get their emails from these extra accounts, iCloud included.
These OS-cloud-account systems may make sense for businesses but they absolutely don't help home users.
...ideal? Every computer I've used since my first Commodore 64 has come without a cloud account. The thought that a cloud account is somehow a requirement for a "desktop" environment seems so alien to me I can't even begin to see what our ideas of computing have in common.
An internet connection is required, sure. But a "cloud account" is the invasive thing I try to avoid or remove, each time more frequently, from my home computers.
There are millions of people using Google sites in the browser (no apps) on Windows. There is no silver bullet "make it cloud ready" that will make the Linux desktop happen.
Honestly I think it’s simpler than that. The Linux desktop experience is just terrible... I have never successfully installed a Linux distro without having to fiddle with some boot flag or some config file. Every major Linux desktop environment has been unreliable at best. Software support is fragmented across dozens of distros.
I think WSL has been the biggest win for Linux on the desktop, but ironically it’s on windows.
Don't get me wrong, I love Linux, I've had it on my machines in many different flavors, ever since 2010. However for at least the past 3 desktops I've had, there were always some issues during installation or fiddling to do after installation to get things running smoothly.
For example, my current home PC has an Intel i9-9900KS on a nice Gigabyte motherboard and an NVIDIA 2070 Super. When I try to boot into Ubuntu 20.04, the display driver always fails (black screen / a mess of pixels). I had to boot with some GRUB flags in order to get to a stage where I could install the proprietary drivers. This is because the open source drivers for NVIDIA GPUs are terrible; it's not the fault of Linux, but that's not the point. It's a bad user experience for at least half of the users with a fairly modern discrete GPU.
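For anyone who hits the same black screen, the workaround was roughly this (a sketch from memory; the exact kernel parameter and driver package version will vary by setup):

    # At the GRUB menu, press 'e' and append to the end of the 'linux' line:
    nomodeset

    # Once booted to a desktop, install the proprietary driver and reboot:
    sudo ubuntu-drivers autoinstall    # or: sudo apt install nvidia-driver-440
    sudo reboot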
A few months ago my girlfriend (who has a degree in computer science and is currently getting her PhD in mathematics) tried to install Ubuntu on her Razer Blade 15 (2018). Initially she had some trouble getting the NVMe drive to show up (it was an issue with UEFI settings); we eventually fixed that, but after installation the wifi didn't work. At that point she gave up and ended up going with WSL; she was up and running 30 minutes later.
Maybe if you buy a Dell Developer Edition or a ThinkPad, you will have no troubles.
The average user is not going to want to hear that they have to either return their NVIDIA card and get an AMD one (a non-starter for people who need CUDA / TensorFlow) or go read some wiki about how to resolve the issue. They will just go back to Windows/Mac.
Same here, and I prefer Linux systems that give me more control, so I take the time to install Void and Gentoo on all my machines (and even my primary dedicated server runs Void).
But whenever I try to install Ubuntu or Fedora on a laptop, it just installs and boots fine.
I recently noticed how severely limited Android is in this regard during the recent Android Firefox update fiasco.
On desktop Linux I could have just backed up the profile folder and downgraded the package, and it might not have hit the repositories at all due to negative user feedback.
Example from 2020: Running Ubuntu with two displays each of which has a different scaling factor requires switching to Wayland, because on X11, any window that touches the edge of one of the displays will run unusably slowly. (I don’t know why.) A non-technical “typical desktop user” would never have figured this out.
This is on a fully supported Linux machine (XPS 13 9300 Developer Edition).
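(If you're not sure which session type you're on, checking is a one-liner, and on Ubuntu you can pick the "Ubuntu on Wayland" session from the gear icon on the login screen. Rough sketch; the exact menu wording varies by release:)

    echo $XDG_SESSION_TYPE    # prints "x11" or "wayland"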
Well, that's mostly due to Ubuntu still defaulting to X. Fedora switched to Wayland by default back in 2016, and even RHEL 8 (released 2019) defaults to Wayland.
I feel like that just reinforces my point. Figuring out which random unsupported distro to install on the Linux laptop you bought is even less feasible for non-technical users than choosing a different login session on Ubuntu.
The Linux desktop experience with tweaking is, in my humble opinion, best in class. The Linux desktop experience out of the box is acceptable on systems where it works.
Fantastic is a matter of opinion and I have a feeling that the majority wouldn't share that opinion.
People with your opinion are most likely used to buying hardware that is supported, or at least not buying stuff that almost certainly won't work, then installing and moving on. You think the parent poster is full of it.
People with the parent's opinion are used to buying something that works well with Windows, trying to install Linux, finding three things that don't work well, fixing two out of three, and quitting in frustration. He thinks you are full of it, or willing to devote far more time to fixing broken shit.
Graphic performance is excellent if you don't insist on say using Nvidia hardware with the open source Nouveau drivers everyone knows sucks. Open source AMD and closed source Nvidia both provide good performance.
Going with just US figures, the average spent on a computer in 2020 is $632 total. A high-end card, on the other hand, is $400 to $800 each. A $2000-3000 PC with $800-1600 in graphics hardware is probably up there in the 99th percentile of configurations; it's pretty niche. Even so, I cannot imagine any technical challenges whatsoever with all the monitors hooked to the same GPU, and a high-end card ought to support 3 monitors.
So now are we talking about a 2000-3000 dollar pc with 2 dedicated GPUS with 4+ monitors?
Brief research seems to suggest this is challenging. Improving it would also seem not to benefit many users. Even talking merely about windows gamers only 1% used crossfire or sli.
There is even a plausible solution for people who want lots of displays so long as they are ok with a single gpu.
Although supporting 2 to 3 monitors is incredibly common, there are actually cards which support 4-6.
I give you a variety of points and all I get back is a single flip quip. This is a disappointing rejoinder.
You CAN have a high end desktop. You can even have 3-6 monitors if it makes you happy. The only thing you cannot do is plug your monitors into the outputs of both GPUs. You must purchase GPUs wherein a single GPU has sufficient outputs and let the secondary GPU serve to aid in your favorite GPU compute or game playing adventure.
Is it possible to have two cards and have the second card do computations for the first one to render? I would have thought the time it took to move the data would make that hard.
Generally one actually runs games on one monitor in the first place so this is the only way it even could work. Work can be divided up by frame for example.
Crossfire and SLI rely on a high speed interconnect and game specific support.
Here is an example: take an Intel NUC. Ubuntu: everything works out of the box. Win10? Let's download a WiFi driver from Intel's site! Let's hope you have Ethernet or a Windows 10-supported WiFi device. And no, the WiFi dongle lying around is too old, Windows 7 only...
Bought an Intel NUC, installed Ubuntu: no hardware-accelerated video decoding in Firefox, and Chromium needed to be compiled manually with a patch.
This functionality works out of the box on Windows (not that I installed it, as Windows 10 is a dealbreaker to me so I ended up returning the NUC and choosing something else).
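(For context, Firefox did eventually gain VA-API decoding behind hidden prefs. From memory it looked roughly like this at the time, on a Wayland session; the pref names may have changed since:)

    # in about:config
    media.ffmpeg.vaapi.enabled = true
    gfx.webrender.all = true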
Have a thinkpad X1 Carbon Pro and have run fedora, ubuntu and Qubes OS on it at various points. Have Ubuntu on it now (as that's the distro my company wants everyone to use). In all of those distros the install has been completely trouble-free and everything works completely fine.
Also I don't use wayland and don't have the scaling perf problem that someone else was talking about.
Have an X1 Yoga here with Arch on it, it was painless.
Bought an X390 for my mom last year, tossed Ubuntu on it after putting in a larger NVMe. (she prefers to use a more 'mainstream' distro so she can follow tutorials to setup new dev environments for things she's tinkering with) Her Macbook Air is weighing down paper now.
My guess is the contrast in experience can come from difference in hardware. Drivers can be a big hassle for desktop if you're unlucky. Occasionally when I get new hardware that's not a laptop it can be frustrating with GPU or (curses) Bluetooth drivers. It's been improving a lot, though. Lenovo and Dell have really gotten better over the years.
Funnily enough, Windows is going to phase out drivers from Windows Update, so I can see the scales shift.
My pains with Linux rarely have to do with hardware. It is a gigantic complex incongruent mess of an operating system where software routinely requires hours of my time to make it work properly, google searches have a tendency to land you on 5 year old pages, which in Linux terms means they are now 3 major revisions out of date, and the community is full of condescending evangelists.
Maybe Linux would get wider adoption if its evangelists weren't so condescending? Maybe instead of assuming everyone who's had a bad experience with your OS must be missing something obvious, you should give the benefit of the doubt?
Until earlier this year I ran Lubuntu on 4/5 of the PCs I own. I've tried other distros but that one worked the best for me and it was still a giant pain in the ass. There's only 1 Linux PC left in my fleet now and it is mostly because I haven't turned it on in 6 months.
Not sure if the replier was only specifically talking to you. I don’t like Linux [on the desktop]. Most responses in geekier online communities do seem to be smug and/or assume you don’t know things.
That was happening in this thread too outside of you.
For reference, I too have spent a decent amount of time with Lubuntu. I doubt I’ll switch from Mac for the foreseeable future.
My experiences with Debian have been pretty clean outside of needing to pre-prepare the binary blob package for the ethernet and wifi drivers on the slightly older laptop that I wanted to install it on. This was expected due to Debian's stance on non-free packages on the install media.
Pretty much any other modern distro install has gone 100% cleanly with no prep-work needed.
That pretty closely matches my experiences as well. The only exception is that my first experience with Linux on something other than a raspberry pi was an optimus laptop.
NVIDIA issues aside, I've never had any issues with using a desktop oriented distro as a desktop. Way fewer issues than I've had with Windows.
Installing software that isn't in your distro's repo, either because it is too new, too old, or was just never adopted by a maintainer, is usually a pain in the ass.
Software routinely requires tiresome google searches that largely land you in out of date non-documentation in order to get basic functionality (like say, drag and drop) working.
Getting help from the community is basically impossible because whatever you're trying to do you're "using the wrong software/distro" or "don't really want that".
Oh yeah, and any time you bring up these sorts of things, someone like you shows up to do their damnedest to dismiss these complaints. Often by comparing these problems to similar problems in Windows, as though that somehow changes anything.
That's the wrong bet to make, because those are the people that will end up using the Microsoft WSL.
From what I can tell, the problems with Canonical will sort themselves out when the market they've created for themselves is eaten by Microsoft's incorporation of Linux into Windows.
The only people left using 'pure' Linux/Unix will be those who care about quality software, FOSS, and privacy. I hope that day comes sooner than later.
Ultimately, you can run Docker on a VM for local development, and of course you'll run it on a Linux server on production. But you don't need Canonical gatekeeping for that...
Interesting. I'm a fulltime Linux user, but wanted to try a macbook for my current job so they gave me one. I still haven't done any serious work with docker, but people tell me it runs like molasses on mac. Is that your experience? If so, I might want to switch back to Linux.
* It is slower (might not matter depending on your workload).
* It requires manual tuning to set the correct RAM limit (whereas Docker containers on Linux are just normal processes so they just use however much RAM they need like any other process).
* It's difficult to use the normal administration tools you're familiar with: you can't, from the host, see containers running in top, or attach a debugger to them. You have to either open a shell in the container and install all the typical GNU/Linux tools there, or have some weird remote debugging setup.
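To illustrate that last point (a sketch; the container name and image are arbitrary): on Linux a container's processes are ordinary host processes, visible in top/ps and limited per container, while on a Mac they all live inside the Docker Desktop VM, whose total RAM you have to size up front.

    # On Linux: the container is just a normal process on the host
    docker run -d --name web --memory=512m nginx
    ps aux | grep nginx      # shows up in the host's process list, top, etc.

    # On macOS the same commands work, but the processes run inside the
    # Docker Desktop VM, so they don't appear in the host's ps/top output,
    # and the VM's overall memory limit is what you tune in Docker Desktop's settings.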
To go back to Snap for a second: didn't Linux Mint drop it recently?
I'm not happy on Ubuntu with snap because of what seems to be auto-updates -- maybe it's another thing that's auto-updating?
I use Hugo [0], whose binaries are still < v1.0, and my websites break when they upgrade so I want to do that manually. Imagine my surprise after an update / reboot to see I can't build my sites anymore. This has happened several times.
I actually liked the idea of the snaps on paper, especially combined with Ubuntu Core for running lightweight VMs (via LXD). But the tone on that exchange and the "we know best" attitude put me off it.
I have Ubuntu 20.04 installed for tests on one of my home machines, and from time to time I catch it doing who knows what. What tips me off is seeing the drive light going crazy when I'm doing basically nothing. And for some reason it doesn't log anything while doing this...
On paper snap is great, but at this point we've got better alternatives. If you want to use something snap-like either flatpak or appimage will likely work and be way less user-hostile.
I didn't look closely into either flatpak or appimage. But I get the advantages of such a system, and I think it might actually help with the "Linux on the desktop" situation. It's very easy to deploy something with restricted access to the file system, something that has all its dependencies in one place, that updates atomically, etc. It's also nice for random app vendors to be able to know that all their users are on the latest version, or not far behind. It could even push some devs to consider going back to desktop apps, as opposed to the web, knowing that they wouldn't have to deal with 1234 different, obsolete versions because someone's grandma didn't update her system in years.
After spending some time reading the exchanges on the posted link, I've come to the conclusion that the actual problem with snaps might not be snapd nor the automatic updating in and of themselves.
Rather, the problem seems to be a scope mismatch. This whole snap thing looks to me as something that should be aimed squarely at the "desktop user". I use that term as opposed to "system use". So distributing Spotify or whatever via snaps? Great! LXD for a home user to test? Sure! But as a way of managing low-level system components on production boxes? No way!
So I have to say I don't necessarily take issue with Ubuntu pushing this on the desktop version. Sure, it is a pain in certain respects, the main one being slow app startup. But pushing this on Ubuntu Core in its current state is, to me, the real issue. Sure, apparently you can run your own "Snap Proxy" or whatever they call it, which allows you to manually approve upgrades, etc. But why go through this trouble? It seems easier to just run apt-get from a central location when you want, and for the packages and versions that you want.
Yes, auto-updates are built into snap. Also, the "solutions" to disable it are pretty much workarounds [1]. All this looks like just what made me stop using windows 10.
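(For reference, the workarounds look roughly like this, assuming a recent enough snapd; "hugo" here is just an example snap name, and at the time holds could only be pushed a limited window into the future:)

    sudo snap revert hugo    # roll back to the previously installed revision
    sudo snap set system refresh.hold="2021-01-01T00:00:00Z"    # postpone all snap refreshes until that date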
Sigh. I'm a developer that is on early-version binaries often, and need to keep them frozen and update them manually. I shouldn't have to default to compiling from source.
Can I bypass snap entirely without major issues? I like Ubuntu, but indeed, I left Windows 10 officially due to auto-updates.
The broken updates are what essentially forced me off of windows. Basically I get some awful untested update every 3-4 years that causes a boot loop. Happened in Vista, 7 and 10. The worst part is that the broken update stays in cache and forced updates basically brick your machine every single day until you decide to simply disable the windows update service, delete the windows update folders in hopes that this fixes the problem or reinstall the entire system hoping Microsoft removed the boot loop update.
You realize that's literally just to fetch SSH keys, right? It's the authentication method so they don't have to provide a default password. It's not some sort of phone-home method.
I, like GP, noped out as soon as I saw the SSO requirement. I also noped out of Microsoft's /free/ embedded offering for the same reason.
I don't care why it's needed. I'm not interested in voluntarily tying my business to your product if you're going to restrict access to it this early on.
It's true. I've long been a fan of using Ubuntu on the server because its consistent release and long-term maintenance schedules make planning a lot easier than with Debian, and because the server stuff was relatively stripped down. But the gradual creep of snap into the server is really worrisome. From minor things like a bunch of annoying loop filesystems on a default install, to truly problematic things like how removing snapd led to a broken OS after automatic security upgrades, forcing us to relent and leave snapd installed. And the fact is, as you say, this is a clear pattern for Canonical.
> If anyone at Canonical is listening, you should be aware it doesn't matter how slowly and carefully you approach this, or how you justify it, the first time I'm forced to kiss the ring to get my software to work, your software is gone from any system I own or manage, immediately and forever.
They're taking a look at how Microsoft is slowly doing this with Windows, and following suit.
There are so many FOSS projects out there that we use at work that we're willing to contribute financially to. Either for support, or for specific feature development, or as a donation masked as one of the two. But as you mentioned, the second a vendor introduces code that creates _another_ external dependency, we have to walk away.
>You're not going to get within a thousand miles of monetizing the ecosystem by gatekeeping it -- the moment you even so much as assert the position of gatekeeper you're trying to create for yourselves you're dead to me.
On the other hand, at that point they might finally be a great option for millions of Windows / macOS users...
Can I ask - do you contribute meaningfully to any open source packages?
Do you buy software at a significant commercial level?
I've noticed this sort of total absolutist, outraged, scorched-earth style sometimes comes from the free riders. At least in some businesses I've worked with, if you can eliminate customers that approach things in this way, your staff are happier, your life more relaxed AND you make more money.
A customer paying $100K/year sometimes has really competent folks who have pretty reasonable perspectives - a pleasure to work with. Someone paying $20/year can just rage and rage on forums, endless support tickets for user error issues etc, no perspective in terms of what is going on. Not sure if this has been seen in broader contexts.
"the moment you even so much as assert the position of gatekeeper you're trying to create for yourselves you're dead to me."
I think Redhat did the gatekeeper to 30 billion route. If you are a customer paying for support from them might be worth giving them feedback and more importantly a suggested way forward that works to pay their staff.
What's weird is part of the value ADD for other companies in the tech space is the cloud account that you are railing against. Microsoft has basically forced it on their windows user base, but provisioning users now isn't so bad if you want to give someone office, just add their email to your admin page and entitle them. Boom, they can self solve even at home. Google has made their cloud account EXTREMELY valuable. The same "you are dead to me" ragers are often knee deep in google account / xbox account / netflix go anywhere accounts etc.
> Can I ask - do you contribute meaningfully to any open source packages?
Yes.
> Do you buy software at a significant commercial level?
No.
> I think Redhat did the gatekeeper to 30 billion route.
Where's the RedHat app store?
Which version of RHEL is targeted at embedded devices yet requires an internet connection to install to validate your SSO credentials?
Their business model is support and to my knowledge they have made no attempt to maneuver into a gatekeeping position in envy of Apple and Google's app store rents.
> What's weird is part of the value ADD for other companies in the tech space is the cloud account that you are railing against.
Yes, they've been told the cloud is this magic thing which allows them to abdicate administrative and operational responsibility for their infrastructure if they just pay 10x as much. And they believed it. If Canonical wants to milk that cow, good luck and godspeed. Debian awaits. I am not afraid of complexity, I know the "cloud" score, and have no interest in abdicating any authority over my personal systems or systems I am professionally responsible for.
> The same "you are dead to me" ragers are often knee deep in google account / xbox account / netflix go anywhere accounts etc.
I don't own a console. I don't have social media accounts. I don't subscribe to media services. I host every network, compute, and data service I consume to the extent that is practical -- moreover, they are all publicly available and free to use without advertising or donation. My condolences for the frustration you've experienced monetizing that straw man, though.
> I don't own a console. I don't have social media accounts. I don't subscribe to media services. I host every network, compute, and data service I consume to the extent that is practical -- moreover, they are all publicly available and free to use without advertising or donation.
I think the key phrase here is "to the extent that is practical" — it depends on your definition of practical. I say this as someone who likely has a similar definition to you.
Self-hosting, for various definitions of that, needs to be easier and more seamless for more people to want to do it. There are some products/services that aim to address this; there need to be more.
You're making plenty of unfounded assumptions about the parent poster's personal character.
One is not obligated to contribute to open source packages or pay for high-price contracts to provide feedback on a company's software on a public forum. (And the server portion of Snap is closed source.)
Just because Microsoft and Google do worse does not mean that Canonical cannot do better.
Open source contribution is not a requirement for providing feedback but people who actually understand the industry are more legitimate in most people's eyes.
This is a discussion about Canonical's strategy, which happens to involve technical products and services -- it is not a technical discussion. I see no reason to think that someone's software development chops would correlate with their ability to meaningfully contribute to this discussion.
We're talking about the desktop computing industry, not the OS development industry. You don't need a PhD in computer science to tell a good user experience from a bad one.
> The same "you are dead to me" ragers are often knee deep in google account / xbox account / netflix go anywhere accounts etc.
While I am not GP, I am one of these people. I stay as far away as I can from cloud accounts unless I have the option to self-host. I self-host Nextcloud and I am soon going to self-host Matrix.
I self-hosted and later switched to having the Matrix staff maintain my chat server [0]. I assume this is a reasonable middle-ground between self-hosting maintenance headaches and giving up your ownership for a free Discord server.
I think it is great to use F-Droid or Nextcloud if you want to! You can also use Debian or Arch if you want to. Why is it so bad that Ubuntu tries to innovate in their own way? It's not as if there are no other Linuxes to use.
"Interestingly, Canonical actually released an open-source prototype Snap store backend a few years ago, but there was very little interest from the community in in actually maintaining and running a second Snap store, so the project bit-rotted and became incompatible with the current Snap protocol."
That open-source server was a single-python-file hacky prototype written in an employee's spare time. It's not surprising it got little interest.
Not only that, but this is the biggest omission of the truth: Making your own server is no longer possible because of assertions. All Snaps are now digitally signed by Canonical, so you actually need to have the end-user install a forked snap tool on their system to access the custom repository. You cannot disable this signing - your only way around it is to manually download the snap and install it with `--dangerous` through the CLI. And you won't get auto updates that way.
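For reference, that workaround looks roughly like this (using hello-world as a stand-in for whatever snap you actually want):

    snap download hello-world                              # fetches hello-world_NN.snap plus its assertions file
    sudo snap install ./hello-world_*.snap --dangerous     # installs without signature verification; no auto-updates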
Isn't this good? I mean if you are a distribution you compile the software yourself so you can add the distribution keys (there are some that don't compile stuff ).
Probably you are thinking there should be a text file in your home directory where you could add new signatures to be used, but that could be a security issue, so it probably needs to be something safer. Was any serious patch sent to improve this, and was it rejected, or why do we expect Canonical to prioritize this over other issues?
So you want something like a PPA? Is Flatpak promoting that each app will have its own repo (Skype, Dropbox, Slack, Discord)?
Seems to me that you want a way to have applications and updates installed by people and bypass the review process? If you are a distro maintainer you can change the hardcoded values before you build (distros always change this stuff), but if you are not a distro then you should either submit the app for review or do what other apps do and show a notification that a new update is available with a link to a download page.
> Is Flatpak promoting that each app will have its own repo (Skype, Dropbox, Slack, Discord)?
Flatpak CAN support the scenario where each app has its own repo. It's an option, not forced down your throat. Or you can self-host your private repo (e.g. applications used inside your network) IN ADDITION to public repos.
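For example (Flathub as the public repo; the internal URL is made up):

    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak remote-add --if-not-exists internal https://flatpak.example.internal/repo    # hypothetical self-hosted repo
    flatpak install flathub org.gimp.GIMP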
> Seems to me that you want a way to have applications and updates installed by people and bypass the review process?
Yes, I want to have Canonical out of the way. They can have their own repo if they really want; those who want curation by Canonical are free to use it. But I want to have options of different repos, with different curators. Canonical doesn't deserve to be the single gatekeeper between me and the apps.
> If you are a distro maintainer you can change the hardcoded values before you build(distros always change this stuff) but if you are not a distro then you should either submit the app for review or do what other apps do and prompt a notification that a new update is available with a link to a download page.
And this attitude is going to be a major reason why most distributions and users won't adopt snap. It will die the same way as other Canonical dead-ends did, while damaging the Linux ecosystems for a few years.
If you are a user then you use a distribution, say Debian or Arch. This distro could set up whatever store, repo and keys they want, and Canonical is not forcing themselves on other distributions (it is not systemd).
I am looking at this from a dev point of view, you are asking to prioritize a feature that almost nobody will use over other high priority tasks. The article mentions Canonical made this mistake in the past with Launchpad and it was for nothing. The store code seems to be a mess and you can't just drop the code on GitHub and you are done; it is a lot of work to refactor the code. Same for the client, it is a lot of work to correctly add the feature to support multiple repositories; if it was only 1 day of work somebody would have forked the client, patched it and published it already.
After you work as a developer and see many bugs or problems you look at things differently, for example as a dev I could allow access to an advanced feature in my app, it seems to be just a few lines of extra code. But the reality is that this costs more
- maintenance cost as the software evolves and things under the hood change
- this new feature needs to be documented, someone needs to write documentation, maybe record videos so the minimum number of people contacts support for help
- people don't read documentation so people will contact support and complain that the feature is not working as they expected or that is should have more X or Y
- since it is an advanced feature some people might break their stuff and then complain and make a lot of noise, and as a dev I need to go and add many checks in the code to prevent people breaking their shit.
> This distro could set up whatever store, repo and keys
But only one. And changing it is a matter of replacing the binary/entire package. It is a far cry from `flatpak remote-add $url`.
> Canonical is not forcing themselves on other distributions (it is not systemd)
Systemd did not force itself on distributions either. Systemd was solving real problems distributions had, while others were ignoring them. Including Upstart, which only made it worse.
It's not like systemd had an easy start. Even Redhat's management didn't see the point (Lennart P. couldn't work on it during his working hours). It took adoption by Arch to change Redhat's position.
> I am looking at this from a dev point of view, you are asking to prioritize a feature that almost nobody will use over other high priority tasks.
How do you know? Did you do any research among users, or does it just run against Canonical's modus operandi, where they try to insert themselves as gatekeeper in some layer of the Linux community and be the single entity that can be approached for a given topic?
For many, it has much higher priority than forced updates, enforcing a single signing key or paid custom stores for FOSS.
> The article mentions
The article does not make honest arguments, it often misdirects. It is an apologia piece.
> Canonical made this mistake in the past with Launchpad and it was for nothing
Canonical made many more, and more serious, mistakes with Launchpad. Yes, PPAs were a dead-end, but for other reasons that are for a separate debate.
> The store code seems to be a mess and you can't just drop the code on GitHub and you are done; it is a lot of work to refactor the code.
The code is a mess from the start. Somebody didn't do their homework on what their users would be interested in, only on what they themselves are interested in (see gatekeeping above).
> Same for the client, it is a lot of work to correctly add the feature to support multiple repositories; if it was only 1 day of work somebody would have forked the client, patched it and published it already.
On the client we can see that they tried to do an MVP, be first on "the market" and capture it. It is full of user-visible annoyances that are not going to be solved (polluting the mount namespace, a visible snap dir in the user's homedir, no dedup, etc). They did a quick and dirty job to be first, instead of doing it properly, like the Flatpak guys did.
> But the reality is that this costs more
So drop the things that make it complicated, such as routing separate stores through Canonical...
The reality is, that this is just an excuse to do what Canonical wants, not what their users want.
I don't see Linux users demanding snaps in large numbers, so it seems to me that there is some law that if I create a Linux-related project I need to first ask all the Linux users and all the distros for approval and implement and support all of them.
I do not remember GNOME or systemd listening to what users want, they had a team with a big ego and implemented their vision.
If I were a billionaire and wanted to create my own DE or distro, I would never waste time making sure my stuff gets approval from people who think they know things because they follow steps from a wiki or read some blog post about stuff. I would respect the GPL, and if some dude wants to add a feature he can fork it and add it; I would be focused on the goal (and the goal would not be to destroy other distros or projects or force my vision on them).
> I don't see Linux users demanding snaps in large numbers
For me, Snap is the best package system.
When I publish a new version of Wekan Snap, it's updated very fast to those 8k servers around the world where someone has installed Wekan.
Canonical has paid all that huge bandwidth, server and admin cost to maintain snap build and download servers. For me, publishing Snap version is free.
When Snap is updating on some server, there is very short amount of downtime, and then Wekan is up again.
Wekan Snap is in strict sandbox, and can not write to directories outside of /var/snap/wekan/common.
Compared to Docker, Docker updates much slower and uses much more disk space with all those layers.
Compared to Flatpak, Flatpak uses much more disk space.
> Compared to Flatpak, Flatpak uses much more disk space.
Flatpak has content-based deduplication via ostree. Snaps are squashfs images. They both have a concepts of shared common parts, runtimes.
They both take about the same space on disk. However, ostree has a hardlink farm for its content store, so if you do not know how it works and how to measure it, you might have the impression that it takes more space. It does not. In addition, if you have multiple applications, anything they have in common at file-level granularity will get deduplicated.
Sorry, what I meant is that people that don't run Ubuntu don't seem interested in having snap by default in their distribution; at least, I only see the voices shouting that deb is enough or Flatpak is better. So in the end, if I were Canonical's leader I would say fuck them and not put my few developers on implementing a different store for the other distributions.
> I don't see Linux users demanding snaps in large numbers
There is a demand for a way to package and distribute applications independently of distribution and with sandboxing. Hence Flatpak, who are doing it right.
> I need to first ask all the linux users and all the distros for approval and implement and support all of them.
Did you ever work with folks doing system-level work? Adoption by distributions is definitely a concern for them. They don't want to work on something that no one would use, and being mostly independent, they cannot cram it into a distribution the way Canonical is doing. Even Redhat people have to prove the worth of their solutions.
> I do not remember GNOME or systemd listening to what users want, they had a team with a big ego and implemented their vision.
Because you never met them. Both teams do listen to what users want. Just do not mistake what users want with what a few loudmouths want.
> If I would be a billionaire and I want to create my own DE or distro I would never waste time to make sure my stuff gets approval from people that think they know things
Even if you had your own distro, you would probably be interested in its adoption, right? It is a bummer to have a distro that nobody would want to use ;).
But I understand; it is a different thing to work with people who have an idea what they are doing, vs. listening to bikeshedding.
However, this is not a case of snap and Canonical. They are doing the wrong thing, because they have different objectives than the ones publicly communicated, and they know it (they communicate technical hows, but not business whys).
Because I am a developer and I have experience with different technologies and with different kinds of users, no, I would not try to get the largest number of users. What we need is more science and data; I would not do X because some designer thinks it looks cool or some developer here wants to play with some cool tech.
For example, a few years ago KDE Plasma had a maintainer in charge with a big ego (yeah, KDE has some people like this, not only the other DE). Users wanted an option to disable a thing on the desktop (the cashew, https://www.omgubuntu.co.uk/2019/10/kde-kills-desktop-toolbo...). We had patches, but (similar to the GNOME devs) the maintainer's ego was too big and it took years for normality to happen.
IMO if snap is bad or this feature is really needed, it will happen or snap will die. Same for tray icons in GNOME: they will add them back when the big-ego guy who is against them finds something else to focus on (not sure we will ever have UX studies and usage data for decisions).
Tray icons in Gnome are not about a big-ego guy. The issue is entirely different, rooted in a different problem.
The systray appeared for the first time in Windows 95, which had no notion of services, to say nothing of user services. Here it started to be abused, at first for quicklaunch scenarios (where an app preloaded itself into RAM, or at least into the disk cache, at login, so that when the user clicked the app icon it launched "quickly"). Later it started to be abused by apps to make themselves difficult to turn off (Skype is the classic example). The respective developers redirected all the WM_CLOSE messages to just minimizing the app, removing it from the taskbar, but kept the app running and restorable from the systray. Many users were frustrated that they were not able to quit the app (if they noticed at all); power users just nuked them from the taskbar.
Models like this are not acceptable on a modern desktop. Even if some people are used to this and cannot imagine something different.
It was Android that came with a solution for this: split the app into the UI (Activity in Android parlance) and the background task (the service). The background task cannot interact with the user directly, but it can communicate with the UI (if/when it is running) and send notifications when it needs attention, so the user can launch the UI, for example (one nice thing is that notifications persist even if the process originating them crashes). It can be launched by the system at login (i.e. as a systemctl user service on the Linux desktop), or it can be launched on demand by the UI part, and it can be similarly terminated. The UI can do many things, but one thing it can't do is keep its process running once the user has closed all its windows (Gnome-on-Wayland started to check for this and complains to the user about it, offering whether to quit the app or not).
This model works great for apps that genuinely need to run in the background, check for things and notify the user on some event (mailbox monitors, instant messaging, file sync, media players etc). It does not work for apps which want to force themselves on the user, doing something the app developer wants but the user doesn't (i.e. Skype), or those which have no reason to be in the background but are anyway (VLC for example, where users click on a video, but since it is already running and only allows one instance, nothing happens, frustrating the user again). The split into UI/service is also positive in that the resources needed while in the background are lower; all the assets the UI needs are only loaded when the user launches the UI part.
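A minimal sketch of what the background half can look like on a Linux desktop, as a systemd user service (the unit name and binary are hypothetical; real apps differ in the details):

    # ~/.config/systemd/user/mailmonitor.service  (hypothetical example)
    [Unit]
    Description=Mail monitor background service

    [Service]
    ExecStart=/usr/bin/mailmonitor --daemon

    [Install]
    WantedBy=default.target

    # then, to pick up the new unit and start it now and at every login:
    systemctl --user daemon-reload
    systemctl --user enable --now mailmonitor.service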
Meanwhile, AppIndicators & co extensions are there until the developers migrate their app to the newer model. Which takes a while, Rome wasn't built in a day either.
This logic is fine for GNOME apps: sure, use client-side decorations, use background jobs, remove advanced options, etc. But GNOME can't force non-GNOME apps to follow their big-ego designer vision, and there are also old applications and games that can't just implement client-side decorations because GNOME wants to force its shit on them.
So I see people like you who complain that Canonical did not implement X in snap (like third-party repos) but defend GNOME when it rejects keeping compatibility with other applications, with excuses like:
- GNOME devs are too busy to read and accept your patch (maybe Canonical devs are busy too)
- GNOME devs don't want to add your patch that adds a checkbox to give you an option, because it is too much work to maintain (maybe Canonical thinks it is too much work to maintain that third-party repos patch)
- GNOME devs don't want to support other applications and toolkits; it is the GNOME way only, or if you don't like it use KDE (then the same should apply to snaps: if you don't like it, use Flatpak)
- systemd has some Google DNS hardcoded in the code; if you don't like it, ask your distro to change it or recompile it yourself (then please apply the same to snap: ask your distro to recompile it, patch it, etc.)
- there is a large number of GNOME users who want tray icons, and GNOME's response is "you are stupid, we know better, use something else or patch the code with an extension" (you should apply the same excuse to Canonical)
Hope I made it clear: there are some double standards here, so we should be more consistent. Probably both snap and GNOME are less user-targeted and more dev/designer-ego driven; since nobody pays for the product, user requests are ignored.
I will use just one example to demonstrate that your double standards are not really double:
> - systemd has some Google DNS hardcoded in the code; if you don't like it, ask your distro to change it or recompile it yourself (then please apply the same to snap: ask your distro to recompile it, patch it, etc.)
Systemd has a default fallback hardcoded to Google DNS. Yes, distributions can compile in another default fallback; at the end of the chain, someone had to pick something that works. Users can configure whatever they want - no need to recompile anything, just a config file - and once they configure something, the default fallback is gone; the option picked by the user is used.
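For example, with systemd-resolved (a sketch; the address below is just a placeholder):

    # /etc/systemd/resolved.conf
    [Resolve]
    # Placeholder address; use whatever resolver you actually want.
    DNS=192.0.2.53
    # An explicitly empty value overrides the compiled-in fallback list.
    FallbackDNS=

followed by `sudo systemctl restart systemd-resolved`.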
Canonical's snap store, on the other hand, is not a default fallback. It is the only option allowed, and if you don't like it, tough luck.
Wrt. the discussion, there are three possibilities:
- you don't understand the nuance that makes the difference between those two situations;
- you willingly misrepresent two different situations as one;
- you don't understand what you are talking about.
Anyway, in all three cases, there's no point to any further discussion.
On the rest: "available" doesn't mean anything; it is low-quality packaging. On any distro that doesn't run apparmor, the snap sandbox doesn't work, for example. That's not something you would use outside of experiments.
Linux Mint put in an apt rule to avoid it installing as a dependency of other apps. This decision was triggered by Canonical's announcement to stop maintaining the Chromium deb package. Linux Mint was pissed they had to either switch to the Debian version or to the Snap, so they made some fuss.
I know of at least three Ubuntu derivatives who have Snap preinstalled and every snap I maintain has between 5% and 10% non-Ubuntu users.
> On any distro that doesn't run apparmor, the snap sandbox doesn't work, for example. That's not something you would use outside of experiments.
Given that every AppArmor thing Snap needs is in the upstream kernel, any distribution with a fairly recent kernel and userspace has full confinement. The main limitation is that Snap can't run when SELinux is running at the same time. Canonical is working with a bunch of other people on LSM stacking in order to make that work.
> That's not something you would use outside of experiments.
That might not be something you use outside of experiments, but don't generalize statements like that.
> I know of at least three Ubuntu derivatives who have Snap preinstalled and every snap I maintain has between 5% and 10% non-Ubuntu users.
Pop!_OS doesn't install snap by default, but it does install Flatpak.
> Given that every AppArmor thing Snap needs is in the upstream kernel, any distribution with a fairly recent kernel and userspace has full confinement
Just because it is in the upstream kernel does not mean it is used by other distributions, only that it can be used. But so can SELinux, which is also in the upstream kernel.
> The main limitation is that Snap can't run when SELinux is running at the same time.
Exactly, different distributions have chosen different LSMs in the past and they are going to stick with them. RHEL or Fedora are not going to switch to AppArmor anytime soon.
> Canonical is working with a bunch of other people on LSM stacking in order to make that work.
Good for them that they are realizing this is a problem. Flatpak was developed from the start with the different LSMs in mind.
> Probably you are thinking there should be a text file in your home where you could add a new signature to be used, but that could be a security issue, so it probably needs to be something safer. Was any serious patch sent to improve this, and was it rejected? Or why do we expect Canonical to prioritize this over other issues?
If someone can replace that text file, they can replace the binary the key is compiled into.
Apt has utilities for managing keys. You could keep it simple with a plain text file that anyone can read and write, but some dude could paste the wrong thing and corrupt the file, or you'd need to update the format one day and old scripts would corrupt the file, etc. It can be done, but as a developer I put constant numbers and paths in my code when there is not enough of a reason to justify the effort of storing those values somewhere else. From the article it seems there is not enough interest from distributions in having their own app store, so there is no justification for prioritizing this feature over other ones.
No, this is incredibly bad. Think about the pain we're all going through to rotate UEFI certificates. There is almost no system where this is worth it.
You can always modify the Linux kernel "open" system call to detect when an attempt is being made to open the certificate, and then return your own certificate instead.
Ideally you'd be able to add the alternative store's public key to the existing snap client. Like you can do with existing package managers to include alternative repos
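For comparison, that's all it takes with apt; a sketch with placeholder URLs and file names:

    # Fetch the vendor's signing key into its own keyring (placeholder URL/paths)
    curl -fsSL https://example.com/repo/key.asc | sudo gpg --dearmor -o /usr/share/keyrings/example-archive-keyring.gpg

    # Register the repository and bind it to that key
    echo 'deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/repo stable main' | sudo tee /etc/apt/sources.list.d/example.list

    sudo apt update

No recompiling, no fork; the trust anchor is just a file the admin controls.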
They deliberately designed the software to not allow that without forking. I mean, it would be trivially easy for Canonical to just put the public key in a file and have Snap read the key from that file. Instead, they've compiled it in.
Three years ago, I actually had a discussion with them about External Repositories. It's called "External Repositories?" on the forum and has, like, over 200 messages on it. They didn't budge one bit despite our arguments on why this was a very bad idea on their part.
Was there a competent patch submitted about keys and was this discussed in public? I imagine you can't just drop the keys in a random text file, it must be protected like the passwords file.
You still need to protect it from being changed by mistakes or malicious code. For apt you have utilities to edit the keys; you don't edit a text file by hand.
No idea, but honestly I would not spend my time writing such a patch yet; only if I intended to create a great snap store to rival Canonical's. Otherwise we have enough working alternatives for all use cases that are not about third-party binaries reviewed by Canonical.
> Not only that, but this is the biggest omission of the truth: Making your own server is no longer possible because of assertions. All Snaps are now digitally signed by Canonical, so you actually need to have the end-user install a forked snap tool on their system to access the custom repository. You cannot disable this signing - your only way around it is to manually download the snap and install it with `--dangerous` through the CLI. And you won't get auto updates that way.
The blogpost is very clear that each _device_ is made to point to a single store. If you want to point it to a different store, you have to recompile snapd to include your own certificate.
This is why I talk about, for example, Manjaro having their own store. Since they compile snapd, they get to choose what certificate is used and so what store is being used.
It's a right PITA open-sourcing a server component that ties into various back-end business processes, and a completely thankless task.
Not at all surprised they didn't bother. It's not like they're swimming in money anymore, they have to be more careful what they spend time and money on.
"Although they invested significant resources in open sourcing Launchpad, there is still only one instance of Launchpad running and they have not received any significant contributions from non-Canonical employees."
For what it's worth, there is another Launchpad instance in existence: https://quickbuild.io/
From personal experience, I found it functionally impossible to get a local instance working when I tried a few years ago. I applaud someone else managing to pull it off.
Launchpad is also objectively more complex and harder to use than the alternatives, and subjectively I'd say it's ugly and looks dated. Launchpad probably isn't something people want to use because of its product design, and thus there's not much activity around it outside of Canonical.
I think it would be silly to assume that a separate project that is liked wouldn't get traction outside of Canonical because of their experience with Launchpad.
> Launchpad is also objectively more complex and hard to use than alternatives, and subjectively I'd say it's ugly and looks dated
Launchpad is _really_ old though. When it was open sourced, there were very few alternatives and those were often even more complex and harder to use.
I think part of the reason why it was almost never used was because it does so much: project management, bug trackers, build service, package hosting, ISO and distro building, ..
If you only need a few of these features, you are much better off using alternatives with a smaller scope.
Launchpad actually predates Ubuntu :) A lot of the methodologies and practices used to develop it (both good and bad) didn't even get names until years later. I even hear monoliths are becoming fashionable again.
> A year later, in the Ubuntu 20.04 package base, the Chromium package is indeed empty and acting, without your consent, as a backdoor by connecting your computer to the Ubuntu Store. Applications in this store cannot be patched, or pinned. You can’t audit them, hold them, modify them or even point snap to a different store. You’ve as much empowerment with this as if you were using proprietary software, i.e. none. This is in effect similar to a commercial proprietary solution, but with two major differences: It runs as root, and it installs itself without asking you.
> First, I’m happy to confirm that Linux Mint 20, like previous Mint releases will not ship with any snaps or snapd installed.
> Applications in this store cannot be patched, or pinned. You can’t audit them, hold them, modify them or even point snap to a different store.
This is not entirely correct. Distributions can use a "brand store" to have complete control over which packages their users get.
> You can’t audit them
Many Snaps contain a build manifest in `/snap/snap-name/current/snap/manifest.yaml`. This manifest contains a log of everything that is used to build the package. For snaps built on Launchpad, this is automatically enabled and includes a link to the Launchpad build log for that snap. This is one of the build logs for the Chromium package, for example: https://launchpad.net/~osomon/+snap/chromium-snap-firstrun-n...
Using Launchpad as the source of truth, you can be 100% certain that the snap you're running is built from the source it presents. This is the same infrastructure that builds and provides trust for Ubuntu itself.
The snapcraft build service uses Launchpad in the background, so any snap built using that can be audited just like regular Ubuntu packages. Snaps built on third-party infrastructure can enable this manifest using an environment variable.
I don't know why Linux Mint is spreading such misinformation, but this is harmful.
Users have many ways to hold snaps temporarily. If they want to hold snaps indefinitely, then they should install them using the `--dangerous` flag. That won't give them any updates, but I'm guessing that's the point. They can always get the latest version of the app manually using `snap download snapname` and install it using `snap install snapname.snap --dangerous`.
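A sketch of that manual flow, with a placeholder snap name:

    snap download some-snap                            # fetches some-snap_<rev>.snap plus its .assert file
    sudo snap install ./some-snap_*.snap --dangerous   # installed unasserted, so it is never auto-refreshed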
> modify them
Just like a `.deb` package, users can unpack a snap, modify any file, repack and install the modified version. Alternatively, you can unpack the snap, modify any file and install the directory using "snap try". You can then modify files and rerun the app without having to reinstall it for every change.
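Roughly (placeholder names, and assuming squashfs-tools is installed):

    snap download some-snap                 # a .snap is just a squashfs image
    unsquashfs some-snap_*.snap             # unpacks into ./squashfs-root
    $EDITOR squashfs-root/etc/some.conf     # hypothetical file; change whatever you need
    sudo snap try ./squashfs-root           # mounts the unpacked tree as the installed snap

From there you can keep editing files under squashfs-root and just restart the app; `snap pack` can turn the directory back into a .snap when you're done.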
> or even point snap to a different store.
The signing keys are baked into the Snap binary but the URL is configurable via an environment variable. So users can change to a different Brand Store without any issue. If they want to use a store with a different signing key, they have to install a different version of snapd which is compiled with the signing keys of the other vendors.
This isn't an insurmountable problem, but it is less flexible than apt, yes. Although I think "apt remove snapd; apt install snapd-fsnap" is still pretty easy..
The security issues SUSE raised were actually fixed, but the Snap developers did not respond quickly enough with the info that the issues were fixed.
> I'd like to point out that the reason for the closure was not a failure at addressing any of the raised issues, but a failure to reply to either of the requests for a status update in July and September.
Does anyone else feel like Ubuntu has lost a lot of its momentum over the past few years? I don't hear about things they're doing nearly as often anymore.
When I first started using Debian, it was a bit of a potato. It improved over time but still felt a little wooden. Ubuntu brought a lot of people back into the Debian ecosystem, ensured that it improved constantly to the point where it's a corporate power buster, and I feel like the next release will hit the bullseye for sure.
While most of my systems are on Debian, the installation experience could really use some work, especially the partitioning part. I've had several occasions when I had to resort to fully manual partitioning of my drives because of some hiccup.
Even in Debian 10, swap is enabled by default in the installer (maybe depending on system RAM) and the guided encryption setup will refuse to proceed in that case. The only way out is switching to another TTY and disabling it manually. The situation and docs around networking and DNS are a bit of a mess with regard to the not-yet-complete migration from ifupdown and resolvconf to systemd services. At least it's not defaulting to NetworkManager.
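The workaround, roughly: switch to a console (Ctrl+Alt+F2 or Alt+F2, depending on the installer), run

    swapoff -a    # disable all active swap so guided encrypted partitioning will proceed

and then switch back to the installer console.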
At the same time, selecting UTC as the timezone in the installer is weirdly hard to find; a lot is predetermined from "select your location".
It's like it's still designed with the use case of non-technical desktop users from 15 years ago in mind
I'm suspecting very few Debian lovers actually use the installer and instead prepare their own installation with self-built images ready for flashing or upload. It says something that Arch Linux, which doesn't have a guided or graphical installation, is a smoother installation experience for a user new to the distro. The wikis for each are night and day.
Don't get me wrong, I love Debian too but there's some areas where it's unnecessarily confusing for those who haven't grown accustomed to its quirks over the years.
Rather than "I love Debian", I think its core strength making the preferred choice for most situations:
* well-maintained and large user-share
* lowest common denominator for most deployments; regardless of whether it's cloud, SBCs, bare-metal servers or desktop, Debian's default installation has a base set of packages for any scenario: just enough that you don't miss anything vital, but without making opinionated choices where there is one to be made. For anything I use it for, the only additional things that go on all machines are userspace utilities like vim/tmux/htop. I never felt a need to remove anything.
You install Debian Testing, but you configure your sources.list with the actual distro name (today: bullseye) instead of testing. Some of the installer media even do that for you: https://www.debian.org/devel/debian-installer/News/2020/2020.... Just keep updating your distro and when it gets released you will be on Debian Stable.
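Concretely, the difference is just the suite name in /etc/apt/sources.list (mirror URL as an example):

    # pinned to the codename: stays "bullseye" through the testing-to-stable transition
    deb http://deb.debian.org/debian bullseye main
    deb-src http://deb.debian.org/debian bullseye main
    # whereas "deb http://deb.debian.org/debian testing main" would silently move on
    # to the next testing release once bullseye becomes stable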
What's wrong with using testing in this context? If I'm running LTS, which for me is server only, I don't want or need newer kernel or x updates. Additionally, if I'm utilizing a DE I generally don't want LTS.
They keep spending effort/money on projects competing with something Red Hat's doing, then getting trounced by Red Hat—sometimes politically, sometimes technically, sometimes both—but continuing to spend effort/money long after it's clear they've lost. I have to imagine that's a big drag on them.
It's not just a release date issue. It matters (to me, at least) that Canonical was doing its thing, RH invents something shortly after that trounces Canonical, and Canonical makes futile attempts to save the ship before finally dumping the project.
For example, Upstart. RH invents SystemD, Canonical stays with Upstart until, like, 2017 before giving up.
systemd's widespread adoption was somewhat controversial and took years. I realize Upstart didn't receive widespread adoption, and Canonical has historically liked to start its own products (Mir and Unity as you noted). But the discussion around systemd was not "everyone except Canonical thinks it's good", and Upstart had been in development for several years.
Wasn't it Canonical creates Upstart, switches Ubuntu to it, RH switches to it; a few years later, RH creates SystemD, RH switches to it, Ubuntu switches to it?
Nah, they could have maintained Upstart integration like they did before. The number of packages that need init system integration is tiny compared to the whole of Debian.
> Red Hat did use upstart before adopting systemd FWIW
RHEL added support for Upstart as a hybrid with SysV, however it was never heavily used and not by most of their own packaged RPMs. Spin up a CentOS 6 server, install a bunch of daemons then go compare /etc/init/ to /etc/rc.d/init.d/.
But the init system was Upstart. Upstart, like SystemD, can start daemons from SysV scripts. At the time, the majority of distros still used SysV or SysV-like init systems. As long as the newer systems had backwards compatibility with the SysV scripts and the newer functionality was not needed, why would upstream switch to the new format?
SystemD has managed to gain near-ubiquitous usage, to the point where plenty of upstreams now only ship systemd unit files, but I would argue that this change was first initiated by the introduction and adoption of Upstart in both Ubuntu and RH.
We tried to use Upstart properly in Fedora. We spent several Fedora releases trying to do that (Fedora 9, Fedora 10, Fedora 11, and Fedora 12). We gave up by Fedora 13.
Well, they spent several years on Unity8+Mir, and then abandoned it when that went nowhere.
They then pivoted to Snaps, where they are so proud of themselves on their echo chamber of a forum that they can't see that the end users are less than impressed.
Honestly, I've stopped paying attention because Ubuntu is great at building technology with critical, mind-bogglingly obvious flaws that sink the project every time.
The Ubuntu Phone OS, Ubuntu Touch (https://ubports.com), is making rapid progress as a community project. It works very well on the PinePhone, and there are also other open-source OSes for the PinePhone.
No, Juju seems to still be going strong. Mir is still going strong too - though it is a Wayland compositor now. Even Ubuntu Touch is still going, though as a community project rather than a Canonical one.
The LXC/LXD folks at Canonical are doing great work with adding kernel features for namespaces etc.
I think the "problem" is that Canonical has figured out - correctly - that all their worthwhile stuff needs to be done upstream of Ubuntu. Even their packagers know to try to get things in Debian first and to reduce diffs when they can. Unfortunately, that means that the unique Ubuntu secret sauce is all the weird stuff that no upstream wants, and if you're in no hurry to upgrade, you may as well run Debian.
This is pretty good for the world but I worry about the sustainability of Canonical as a company. I'm worried they'll end up like Mozilla - senior management finally realizes that doing excellent technical work in upstream communities isn't a net positive for their bottom line and has no idea how to remediate that other than layoffs.
I got so excited about LXD ... until I found out it requires snap.
My first "cup of ubuntu" was well over 10 years ago now and I've always wanted to see it succeed. It's repeated squandering of so much potential.
Edit: I was referring to LXD usage on Ubuntu-20.04 and versions going forward (I should have made that clear in the post). Sure, snap isn't technically required since you can "change distros" or "compile it yourself." Those just aren't great options IMHO (and doubly sad since I'm using the distro of the company that makes the product and the better experience is on another distro).
It does not require snap, it is just the default way to install it. You can compile it yourself, or check the repositories of your distro of choice otherwise.
I think Arch has it in their repositories, for example.
Sure, but that does not mean you cannot run LXD on Ubuntu without Snap.
I compile new versions, test it out in staging, and then ship the update to production servers, which is currently a mix of Ubuntu, and OpenSUSE.
Do not get me wrong, I do not like snap (or flatpak for that matter), and would much prefer if LXD was available through a PPA, or the default Ubuntu repositories, but LXD itself does not require snap, it is just the default distribution method.
As a desktop system maybe slightly, although the change is probably hugely overstated because of very loud subsegments of online communities. Most developers I've seen running linux either run Ubuntu or Fedora and I've rarely ever seen anything else, and that has to be a huge portion of the desktop/laptop segment.
On the server if anything it seems like Ubuntu has gotten stronger.
I think they stopped trying risky new things and are focusing on survival. There is a large number of people (not a majority, though) who regret that we did not get the Ubuntu Phone and the Qt-based DE Unity8.
This is easy to explain if you think of it as a business decision. Why would anyone pay $30000 for a custom "enterprise edition" snap store[1] -- which among other features restores full control of updates -- if anyone could easily set up their own snap stores?
I don't think open sourcing the store would significantly cut their sales. Running a world-wide package distribution network in itself is an incredible cost which many companies are willing to pay for. CentOS did not cut into the sales of RHEL either, for example. It's useful to have someone to yell at when things break.
Also note that there are already alternatives. I personally know a few companies who built their own stores internally and it's actually very easy to do so if you install the apps using side-loading.
I do like the snap concept, but some implementations are just not that great. For instance, kustomize as a snap doesn't let you read anything that is not in the snap fs. Not even in --classic mode. A similar issue happens with nextcloud, which is good on paper, but if you tweak something inside the snap fs it will be overridden in the next update (expected behavior); however, for services that require mutable configuration there needs to be a way to preserve that data (I tested that nextcloud behavior more than half a year ago).
> For instance, kustomize as a snap doesn't let you read anything that is not in the snap fs. Not even in --classic mode.
Classic snaps have complete access to the host filesystem. It is up to the publisher, however, to request classic confinement. If the publisher creates a `strict` snap, users cannot override this (because strict snaps would break when their container and dependencies suddenly disappear)
> for services that require mutable configuration there needs to be a way to preserve that data (I tested that nextcloud behavior more than half a year ago).
Snaps have access to multiple locations to store mutable data like `SNAP_COMMON`, `SNAP_USER_COMMON`, `SNAP_DATA` etc. Snaps can optionally also access regular directories.
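Concretely, for a hypothetical snap named some-snap at revision 42, those variables resolve to something like:

    SNAP_DATA=/var/snap/some-snap/42               # per-revision system data, copied forward on refresh
    SNAP_COMMON=/var/snap/some-snap/common         # shared across revisions, survives updates
    SNAP_USER_DATA=$HOME/snap/some-snap/42         # per-revision, per-user
    SNAP_USER_COMMON=$HOME/snap/some-snap/common   # per-user, survives updates

so configuration that must persist across updates belongs in the `*_COMMON` paths.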
Afaik, Nextcloud stores its configuration in one of those directories. I think what you experienced might be an issue with the NextCloud snap because snapd itself already supports this.
Lol, as if there are oodles of money to be gained from hosting a store like that on Linux. We can see all these Steam game publishers desperate to release games to a community that is rather famous for being cheapskates.
Also, snap existing doesn't hinder flatpak or appimage or debs or rpms.
Since when are Linux users "famously considered cheapskates"? I was under the impression it's quite the opposite!
Of course, all generalizations are just that. But if anything, the linux gaming community has many passionate people more than willing to pay for games on their platform.
Most game developers don't port/build for linux due to the lower market share. A way simpler explanation that obeys basic market rules.
OTOH I know more than a fair share of people who exclusively pirate all the games they play on Windows. I also don't know many people who bought a license for Windows itself.
> release games to a community that is rather famous for being cheapskates
Wrong, if you look at game sales that are donation-based, like the humble bundle (they used to publish average price per platform but I haven't found recent data), you'll see that Linux users actually pay significantly more (voluntarily).
They aren't going for a model like Steam where they sell software to end-users. The idea is to get enterprises hooked on using snaps to distribute their software internally, and then charge them for private repository hosting.
Then fair play for Canonical to get revenue streams from enterprise?
From my perspective the primary motivation seems rather simple in minimizing the maintenance costs/testing of having to publish software for 5-6 distributions.
That to me seems more plausible than evil mustache twisting capitalist reasons.
Highly doubtful they'd make any material amount of revenue from selling software on Linux since a lot of Linux's user base prefer to only use free and open source software.
I tried not to interpret the article in the worst possible way, but I failed, it feels disingenuous to me.
I don't think comparing ppa's to snaps is a good comparison.
ppa's have the disadvantage of potential dependency problems which might break things like upgrades, and that has nothing to do with having a distributed store.
snaps solve this and that has nothing to do with a proprietary single company controlled store.
I'm sorry you feel this is disingenuous. I really tried to explain why there is only one Snap Store from a neutral point of view. Let me know if you have any tips to make it more genuine.
You are right that this has nothing to do with the dependency problems Snap solves. Snap does many things differently but this article only focusses on one to keep it relatively short.
This article doesn't try to present a comprehensive comparison and definitely not between Snap and PPA. The article only focuses on the question why there is only one Snap store. The answer to this question has nothing to do with Snap bundling dependencies.
My main reason for disliking Snap is the fact that it allows anybody in the world to publish a package with minimal moderation. This completely undermines the inherent trust that system package managers should have.
When installing critical system packages, I want to be absolutely certain that these are legitimate/official, and that even if I make a minor error in typing a command, I won't inadvertently install some sort of typosquatted fake version of the package.
When using Apt with the default repositories, this isn't a problem at all, as only known, trusted packages are available. In other words, there's no chance of someone publishing a fierfox or apahce2 package to try to typosquat someone.
I don't even want to talk about the forced automatic updates either... these make it essentially impossible to have a stable/reliable system for specialist use cases, e.g. browser testing, bastion host, build environment, where control over updates is very important.
On the sandboxing - it's good in principle, but rarely seems to be implemented in a truly meaningful way, as ultimately once you have home directory access you don't even need to worry about escalating privileges, because everything valuable is probably in your home area anyway! There's an xkcd about this somewhere...
> My main reason for disliking Snap is the fact that it allows anybody in the world to publish a package with minimal moderation. This completely undermines the inherent trust that system package managers should have.
Where do we get more maintainers? Sometimes in my development I release one new Wekan version per day. Canonical's Snap build servers download Wekan source code directly from GitHub, it is very transparent.
> On the sandboxing - it's good in principle, but rarely seems to be implemented in a truly meaningful way
The Wekan Snap has a strict sandbox, so the code cannot access any directory other than /var/snap/wekan/common. So in case someone finds an exploit in the web service, it cannot escape the sandbox. That is very important.
They pollute your mount list, make your software start way slower, and take up more space. They also sometimes break or require intervention for basic operation due to the security restrictions on them.
The benefits are supposed to be that they're more secure, more portable, and don't junk up your system with files strewn everywhere.
Personally I'm waiting for a better solution than anything we've seen so far for the first of those benefits, and would prefer a cross-platform user-oriented package manager like Homebrew or Nix (but better-supported-on-Linux in the former case and with a UX less like studying for a math exam in the latter) for the other two. IMO every solution to these problems on Linux desktop currently sucks to one degree or another.
I admit I've not checked and have just assumed the package list is both ideologically- and mindshare-crippled, due to the GNU on the name and the seemingly low rate of use, respectively.
Nb I think free software goals are great and all but if I need Slack for work or need my proprietary wifi card to work using a binary blob then I f#cking need those things, and it's nice if my package manager can install them for me. It's also nice if it's got enough eyes on it that I essentially never run into a broken package, even when installing kinda-obscure things.
You can use Guix on distros with impure repos, not just the GNU Guix system, and use the host distro's packages for proprietary software (if you can't just use Slack's web client, or use a laptop with a freedom-respecting wifi card / a little dongle / ethernet).
Note that the sandboxing security theater of flatpak and snap etc is largely just that.
You can have very secure deb packages by having default-deny apparmor rules. This is basically how Android works, it just asks you to grant the permissions in real time (and uses a frankenmix of custom Google fu Android Java API and selinux).
You can also use cgroups to control kernel feature access like devices, networking, peripherals, etc.
It comes from both angles though: having more software provide an apparmor profile (transpilable to selinux policies, etc.), and having GUI-integrated permission and cgroup control functionality in both software stores (via appstream? it seems to be the common thread) for both at-install and at-runtime permission grants.
The problem of course is inertia. Flatpak isn't even compelled to sandbox and its sandbox is not nearly as comprehensive as one would want when running untrusted software, not just potentially exploitable buggy trusted code.
>You can have very secure deb packages by having default-deny apparmor rules.
Nobody would do this on a desktop, because it is massively inconvenient to go through every single app you want to run and debug which AppArmor rule it's violating. Even when running a service with a pre-written AppArmor profile, you usually have to spend a while figuring out what the hell went wrong.
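The usual debugging loop, for what it's worth (a sketch; the profile path is a placeholder, the tools come from apparmor-utils):

    sudo aa-complain /etc/apparmor.d/usr.sbin.someservice   # log violations instead of blocking
    # reproduce the failure, then see what would have been denied:
    sudo journalctl -k | grep -i apparmor
    sudo aa-logprof                                          # interactively turn log entries into rules
    sudo aa-enforce /etc/apparmor.d/usr.sbin.someservice     # re-enable enforcement once it is quiet

which is exactly the kind of per-app babysitting most desktop users will never do.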
Flatpak's sandboxing is a complete joke and is entirely voluntary. Snaps have sane and granular permissions interfaces that you can easily toggle on and off. Canonical is actually enforcing auto-connect rules for the more potentially dangerous ones in their store. If you want to get a classic confined app in the store, it actually has to be approved by their security team.
These things are GREAT for security, which, to be quite frank, is a complete fucking disaster on Linux desktop. X11 is a massive security hole, no real mandatory access control, no sandboxing for apps, local privilege escalations out the wazoo, a quadrillion open security bugs in the kernel. Sure, you can try and set these things yourself, but that relies on the USER to properly configure these things, and if you don't know exactly what you're doing and screw it up (and there are no reliable, consistent guides on how to do these things), then you're just as insecure as you were before.
We're just fortunate that Linux on desktops aren't popular enough to be targeted, because we'd just be getting constantly owned thanks to this massively outdated security model. Windows is actually doing the security model a whole lot better these days, but their popularity and their tendency to implement them poorly and with bypasses to preserve backwards compatibility kind of cancels that benefit out.
It's a real shame that snaps have not taken off. If flatpak wins, and they don't massively overhaul the damned thing to actually add some semblance of sandboxing with permissions controlled by the user, then we're doomed.
Snaps have already taken off in a big way. Many individuals, companies and enterprises have selected Snap because of secure sandbox and automatic updates.
It is just that FUD from Linux Mint causes many HN articles with FUD.
Yeah, but Linux Mint still has snapd removed by default and refuses to support it; so do most of the other popular downstream distros (PopOS, elementaryOS) for desktops. Together, they have way more desktop share than Ubuntu does.
Sure, there are other distros not based on Ubuntu that technically support snapd, but I have yet to encounter one that does it well; Manjaro's still breaks on many snaps, Fedora's snapd stopped working completely for over 3 months and nobody noticed. Pretty much nobody but Ubuntu users actually care about snaps; Flatpak has way, way more support and penetration than snaps do, and it looks as if that isn't going to change any time soon due to Canonical's refusal to open-source the snap store. I don't think snaps have much of a future as a widely-supported method of package distribution if this doesn't change.
Also curious. I'm waiting, or even begging for something like flatpak on mac os just due to the number of times pip/npm/python in general has broken due to homebrew.
• Nothing available that's as lightweight and good as Apple's "office"-type suite.
• Nothing as all-around good as Preview (yes, seriously).
• Worse battery life, partly due to worse OS optimizations and partly due to not having Safari available. Solutions to this in Linux that actually yield good results usually involve hard-capping performance at a pretty low level, IME.
• Little things like screen recording and screenshot capabilities not being as nice for basic use, out of the box.
• AFAIK not only is the default English keyboard on Linux far worse at the task of writing in English than that on macOS, there's not a single alternative layout that's close to as good as the Mac default. Yes, I'm dead serious about this, and frankly it's friggin' weird they'd have such a significant advantage on something like that in the year 2020, and I really wish other operating systems would catch up because I do not like being forced to use Apple products just so my experience composing documents in English isn't bad. I do not understand how they still have such a large advantage here, but somehow, they do. Totally baffling.
• If you do anything serious with software-UI-related graphic design, or work closely with people who do, you pretty much need a Mac because odds are good you'll be using at least some mac-only software or have things shared with you that work only in mac programs.
• You lose a variety of time-saving integrations with i-devices, if you're someone who takes advantage of those.
Look up how you type: those • characters I used in my post, an m-dash (—), a c-cedilla (ç), a german double-S (ß), basic accented characters for various Western European languages (we have many, many loan words and phrases from those in English—façade, résumé, áñ∂ §ò öñ), punctuation for French and Spanish (no, you don't strictly need these for English—but in practice, depending on the register you're writing in, actually you do), major world currency symbols ($, £, €, ¥, ¢), basic mathematical symbols and mathy Greek letters (≤, ∑) and so on.
Look up how you do that on US English keyboard layouts in macOS, Linux, or Windows. Then check AltGr and US International alternative layouts for the latter two. Marvel at how macOS manages to crush all those options in usability with their default, without affecting the experience for someone who doesn't need those at all. Join me in wondering how no-one else has at least matched them on this, which is not some recently-added feature in macOS, but is practically ancient in computing terms.
> Look up how you type: those • characters I used in my post, an m-dash (—), a c-cedilla (ç), a german double-S (ß), basic accented characters for various Western European languages
No thanks! I already knew how to type — (Compose - - -), and I was able to guess how to type the others by simple trial and error, in less time than it would take to duck/google how to type them:
Compose + . + - → · (close enough for me. Edit: Compose + . + = → •)
Compose + c + , → ç
Compose + s + s → ß
Compose + e + ` → è
Compose + e + ' → é
Compose + ` + e → è
Compose + ' + e → é
Bonus:
Compose + - + > → →
Compose + = + / → ≠
Compose + / + = → ≠
Compose + : + ) →
Compose + L + L + A + P → (Edit: HN seems to have stripped out the last two, a smiley face emoji and the \\// / Star Trek/Live Long And Prosper hand. So I guess the Compose key can type more characters than HN even allows.)
ß°ηυ5 þē 5€¢•ηⅾ:
Pressing the alt key reveals underlines under selectable gui elements. Press any letter that's underlined (while still holding alt) to activate the element (menu item, button, etc.). Try this in every GUI you use regularly and then race a macOS user stuck with reaching for a trackpad or mouse everytime they need to click something. Јоⅰη ⅿе ⅰɳ wօɳԁеrⅰηg Һօw ɳօ‐օɳе аt Аррⅼе tҺօυgҺt tо аⅾԁ tҺіѕ, wҺⅰϲҺ іѕ ɳօt ѕоⅿе rесеɳtⅼу‐аԁԁеԁ fеаtυrе ⅰɳ ԌΝU⁄Ⅼіɳυⅹ, Ьυt ⅰѕ рrасtіϲаⅼⅼУ аɳсіеηt ⅰη ϲоⅿρυtⅰɳg tеrⅿѕ (¡ΝаѕtУ Wⅰηԁоwѕ, рrеҺⅰѕtоrіс Wіηԁօwѕ 95, Һаⅾ tҺеѕе υηⅾеrⅼіɳеѕ аⅼwаУѕ ⌵іѕⅰЬⅼе!).
Great! Linux, I assume? (goes looking) OK, yep, I'll enable it and start re-training myself on that, for when I'm not on mac. It's not quite as quick for most of the things I need frequently, but seems tolerable.
Remaining questions: 1) why would the default be bad? Like, why ever would one make a default bad if there are non-bad options? Especially for something everyone should want to be good on a computer, like composing text in the user's language, and 2) ugh of course even once enabled the default compose key is bad (there's a theme here) and both enabling it and configuring it depends on which WM/DE you're in, Wayland vs. Xorg, and so on, and from some googling it's even possible to run into inconsistent behavior between different friggin' GUI toolkits under the same DE.
OK that second one is a comment, not a question. Still.
I'd recently googled the problem of (specifically) typing an em-dash on Linux, and just tried it again, and sure enough this solution doesn't come up until the sixth hit, in an unassuming Google documentation style guide. "Just memorize a 4-digit number" is the overwhelming answer to that question, on search engines, for some reason, despite plainly being awful. Maybe the configuration hurdle and bad default behavior is why that's the go-to solution, I dunno.
[EDIT] none of the above irritation aimed at you, I hope is clear, and sincerely, thanks for the pointer.
1) why would the default be bad? Like, why ever would one make a default bad if there are non-bad options?
Because it is not bad, it just chooses different advantages and disadvantages.
Right Alt could be configured to do regular Alt modifier, or could be configured to enter additional characters (Alt Gr - third/fourth level).
The first case has the advantage that Alt-based keyboard shortcuts on the right side can be conveniently entered with one hand. That is why there are Shift, Control and Alt keys on both sides of the keyboard.
The second case has the advantage that you can enter more characters, but entering right-side Alt-based shortcuts is more awkward.
Conventionally, the US keyboard uses the first approach (as there is less need for entering extra characters), while many non-US keyboards use the second.
Also, why is the Compose key not accessible by default? Because there is no Compose key on a common PC keyboard (in contrast to some old Unix keyboards). Therefore, the Compose key needs to 'steal' some existing key, which is problematic, because users expect existing keys to work as expected. I personally use the Menu key as my Compose key, but other users may have different expectations.
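For what it's worth, stealing a key for Compose is a one-liner under X; a sketch (compose:menu is one of several predefined options):

    setxkbmap -option compose:menu                      # Menu key acts as Compose for this session
    grep 'compose:' /usr/share/X11/xkb/rules/base.lst   # list the other candidate keys
    # Debian/Ubuntu can make it permanent via XKBOPTIONS="compose:menu" in /etc/default/keyboard

but the point stands: it is a choice the user has to make, not a default.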
I imagine it defaults to off and the top results recommend the Control-Shift-U<Unicode codepoint> because otherwise people switching from Windows would complain. It’s my understanding that on Windows typing these characters involves holding down Alt while typing arbitrary Windows-specific numbers on your number pad (What happens if you don’t have one? I’ve no idea!) — or more intuitively, opening a new browser tab, Googling the name of the character, copying it with a keyboard shortcut or context menu (I hate leaving X11 and not being able to simply highlight text and middle click to paste!), and pasting.
All distros I've seen allow you to choose a keyboard layout (and I think at least usually AltGr) at first boot, and I imagine this is how most people using primarily non-English type quickly, on any OS. I personally just use a little WM/DE agnostic script to make capslock useful and add a Compose key:
... and ‘xcape -e 'Shift_L=Multi_key'’ to make tapping Shift work as Compose (xcape is installed by a little bootstrapping script I run when I clone my dotfile repo). But this is better for me because I mostly use Compose for little things like curling my apostrophes (Compose + < or > + ' or " → ‘, ’, “, or ”), not so many characters that it needs to be perfectly fast. I don’t have a monitor set-up that’s nonstandard in anything but aspect ratio, or use native apps I don’t trust (like Snaps), so I’ve never had a use for Wayland.
> • Nothing available that's as lightweight and good as Apple's "office"-type suite.
I personally create markdown documents in vis (a text editor like vim, but lighter-weight — < 4 MB including optional syntax highlighting, and it opens instantaneously enough that I use it to edit many of my HN comments, including this one, by pressing Control + I in my macOS-unsupporting browser) and can compile them to HTML with discount (< 1 MB) or to PDF, docx, reveal.js slides, etc. with Pandoc. This is the peak of ‘lightweight and good’ to me. Apps like Abiword (an extremely lightweight WYSIWYG word processor, but I don't even know how tiny it is because it’s built into the minuscule Puppy Linux distro that is currently running entirely in my RAM — making any traditional OS look bloated and sluggish to be constrained to run off a PCIE SSD at fastest) may better meet other people’s personal definitions of these words. Given the popularity of web “apps” like Google Docs, I think a native application like any in the LibreOffice suite is probably more efficient than most people care enough to notice.
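The compile step is just Pandoc invocations along these lines (file names are placeholders; PDF output additionally needs a LaTeX engine installed):

    pandoc notes.md -o notes.html                     # HTML
    pandoc notes.md -o notes.docx                     # Word, for people who insist
    pandoc notes.md -o notes.pdf                      # PDF via LaTeX
    pandoc -s -t revealjs notes.md -o slides.html     # reveal.js slides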
> • Nothing as all-around good as Preview (yes, seriously).
Not having ever needed to fill out a PDF form, I won’t comment.
> • Worse battery life, partly due to worse OS optimizations and partly due to not having Safari available. Solutions to this in Linux that actually yield good results usually involve hard-capping performance at a pretty low level, IME.
“not having Safari available” — there are a million WebKit based browsers on Linux from the extremely Safari-like Gnome Web to Luakit to suckless Surf.
“hard-capping performance at a pretty low level” — Macbooks have their performance capped by cooling systems that can’t sustain high clock speeds anyway
> • Little things like screen recording and screenshot capabilities not being as nice for basic use, out of the box.
Speak for yourself, Simple Screen Recorder has been exactly that, simple and nice, out of the box, for me.¹
> • AFAIK not only is the default English keyboard on Linux far worse at the task of writing in English than that on macOS, there's not a single alternative layout that's close to as good as the Mac default. Yes, I'm dead serious about this, and frankly it's friggin' weird they'd have such a significant advantage on something like that in the year 2020, and I really wish other operating systems would catch up because I do not like being forced to use Apple products just so my experience composing documents in English isn't bad. I do not understand how they still have such a large advantage here, but somehow, they do. Totally baffling.
Like a sibling comment, I don't know what you’re talking about. However, I can say I love typing with my current layout.²
> • If you do anything serious with software-UI-related graphic design, or work closely with people who do, you pretty much need a Mac because odds are good you'll be using at least some mac-only software or have things shared with you that work only in mac programs.
~~ It’s too bad if this niche has failed to produce more portable programs. But I don’t see it as a failing of the OS for the vast majority of users.~~
elementary OS is itself a serious commercial software-related graphic design project. I am not aware of its designers using any macOS-only software.
> • You lose a variety of time-saving integrations with i-devices, if you're someone who takes advantage of those.
I don’t and I don’t see them as things to be taken advantage of by consumers as much as things to take advantage of and consume them and all their hardware-purchasing decisions through lock-in.
[1]: On the distro I use, PrintScreen instantly pops open a screenshot in a lightweight MS Paint-style editor (mtpaint, which can crop with one shortcut). On ChromiumOS there’s one shortcut for fullscreen shot and one for drag-to-capture (Control + [Shift +] Show Apps IIRC), either one of which triggers a notification with a thumbnail of the shot that can be clicked to open the folder. I've also seen a small gui for taking shots of windows, areas, or the full screen with or without the cursor and/or a specifiable delay (on an old Puppy). I imagine elementaryOS has a very similar method to macOS, but I’ve never bothered to distro-hop to it because after using macOS for a while at school I didn’t miss it at home (it seems like a good OS only compared to Windows).
[2]: Eg. I can type a superscript one simply by tapping <Compose> <s> <1>, where Compose is mapped to tapping my left shift key. IIUC this would require installing third-party software like Karabiner Elements on macOS.
Yeah, I get there are other reasons people choose Linux—I do too, as I find Win10 intolerable as a working environment so if I'm not on Apple hardware, Linux is my only realistic option for getting things done while still being able to interact alright with the world outside my own computer. I just think that's a small set of the reasons one might find any Linux OS to represent a serious loss of functionality, versus macOS. I agree it'd be cool if more major/trendy design tools were cross-platform, but I work with what I've got.
I don't think the default US keyboard layout on mac makes typing superscript unicode easy but it looks like there are a ton of ways to make it easier, by pointy-clicky through the GUI settings to add either of a couple ways of typing them, by editing a keymap file, or yeah, by using any of several programs like Karabiner. That'd actually be a great addition to the default US English keyboard on mac—a toggle for Unicode super-/sub-script modes.
[EDIT] incidentally, have you observed significant power savings with Webkit browsers on Linux? I like Surf and given FF's ongoing shake-ups I'm giving other browsers a look again, for my browsing-on-Linux needs. On macOS I see something like a 15-20% gain in real-world battery life and noticeably-better overall system responsiveness, using Safari over Chrome or Firefox. Is a similar, dramatic effect noticeable in Webkit ports to other operating systems?
Unfortunately I don’t use GNU/Linux on the go enough to give you a good answer. Surf specifically is probably not good for battery life, and now I realize I probably shouldn't have brought it up in this context, because it spawns a separate process per window/tab — it’s noticeably slower than any other browser I’ve used. But GNOME Web/Epiphany does seem lighter than FF from my limited use of it, worth testing. Personally I use Palemoon as a FF alternative without the shakeups and because the Pentadactyl (fork of Vimperator) extension is beautiful, but I can’t vouch for its efficiency.
1) Rarely breaks. Worst I've seen is needing a permissions tweak after an OS upgrade, in many years of use. Back when I switched from Macports I switched because ordinary usage kept rendering Macports so goddamn broken that it was easier to nuke its directory and start over rather than figure out how to fix whatever my bold command of "install package" had destroyed this time. I've never seen normal or even slightly abnormal use put Homebrew in a broken state that required manual intervention to fix.
2) Command line UI is alright. This should be a given but isn't always.
3) Can manage proprietary software for me. There's almost nothing I use (maybe actually nothing?) on my Mac that Homebrew doesn't have an up-to-date package for, aside from Apple-provided apps.
4) Enough people use it that despite being a semi-free-for-all community-run thing packages are pretty much never broken, including the proprietary ones. I'm also continually surprised by how often I find some obscure thing with five stars on Github, want to try it out, and sure enough, there's a Homebrew package.
5) Lets you manage your packages with ordinary user-level permissions. No root elevation required.
6) I can clean practically all of what it's done and get back to a nearly-vanilla system (with the exception of some of the proprietary apps) by deleting one directory. My system and GUI will still boot like nothing's happened. IMO keeping your user-level packages strictly separate from the system-level ones, and managed totally separately, is absolutely the right way to go. I didn't realize how much I wanted this until I started using Homebrew on a Mac, after many many years of Linux package management.
7) It leaves system-level packages that it upgrades alone, so doesn't break core system stuff (I would fucking love this to become a norm on Linux—system uses and assumes latest LTS version for stuff like Python, but I can install whatever version I like for my own use, without affecting that whatsoever and without having to use a different goddamn version-management tool for every single language)
8) I'm pretty sure you can have more than one version of something installed at once and there's a command to switch between them (juggling symlinks, probably), but my recollection of this is vague and may be wrong.
9) If you're actually doing multi-user on your desktop/laptop I think it lets you have per-user install directories and active packages—but I don't really know anyone who does this, and haven't known people to share a computer, really, since over 15 years ago. Amusingly enough, iPads, with practically nothing resembling multi-user support, are the exception to this use pattern, as those seem to get shared among members of a family all the time.
Bad things:
1) Requires Ruby (seems like a dumb complaint, I know, but I hate installing an entire scripting language for a single command line tool, and no not all Linux distros ship with Ruby by default, far from it)
2) I dunno if it still does (I'm working mostly on Debian now) but for approximately forever it's had a completely braindead default for how often it auto-updates its package list when running other commands, and you had to go change that every time you installed it on a new machine so you'd not rip your hair out in frustration when your command to install a package was delayed because it'd been a whopping 16 minutes since the last time it checked for updated packages. Why the hell this didn't default to, like, 12 hours or something is entirely beyond me. AFAIK everyone hates the default behavior, to the point that it's become a common jokey-reference among Mac nerds, and I've never been able to figure out why anyone would want it.
Homebrew is solid, and I think you make a good list.
I had it break before with weird cyclical linking issues, but mostly it works good enough.
What I was more curious about is what specifically sets it apart from a solid Linux package manager?
> Lets you manage your packages with ordinary user-level permissions. No root elevation required.
This is the one major difference I'd say is not currently common on Linux. Could be easily solved by keeping package list databases in $HOME - the only reason root is required nowadays is because these are usually kept as 1 copy per system.
> This is the one major difference I'd say is not currently common on Linux. Could be easily solved by keeping package list databases in $HOME - the only reason root is required nowadays is because these are usually kept as 1 copy per system.
As I posted elsewhere, I think it'd actually be very hard to replicate the experience of Homebrew on macOS, on Linux, specifically for GUI programs, because not only is the Linux GUI (and related multimedia capabilities, for that matter) so much more fragmented than macOS (where all that ships as one big, stable package) but that fragmentation bleeds through to and manifests in one's experience with individual applications. If not for that, yeah, it'd be very achievable.
Its package selection is also a whole lot bigger than most Linux package managers, in my experience. I don't think I've seen a selection nearly this wide since I was a Gentoo user, many moons ago.
The CLI is better than most Linux package managers—some of those are improving, though. Good error messages and "did you mean..." go a long way.
> think it'd actually be very hard to replicate the experience of Homebrew on macOS, on Linux, specifically for GUI programs, because not only is the Linux GUI (and related multimedia capabilities, for that matter) so much more fragmented than macOS (where all that ships as one big, stable package
Curious about this, since as far as audio goes, as long as there's PulseAudio on the system, all others tend to have compatibility with it in mind.
> Its package selection is also a whole lot bigger than most Linux package managers, in my experience
That could be, but I personally haven't had an issue with the Arch repo + AUR selection; that's tens of thousands of packages.
> Good error messages
Hmm, these are mostly there. I just made a mistake on purpose and got this (pacman):
error: invalid option: '--search' and '--sysupgrade' may not be used together
> and "did you mean...
"Did you mean" would be nice. Git has that and it's certainly helpful.
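For reference, Git's version of it is opt-in via config; this is from memory, so verify against the git-config man page:

    # The value is in tenths of a second to wait before auto-running the guess.
    git config --global help.autocorrect 20
    git stauts    # git suggests 'status' and, after 2 seconds, runs it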
> Curious about this, since as far as audio goes, as long as there's PulseAudio on the system, all others tend to have compatibility with it in mind.
There are hold-out ALSA users (even Linux OSS users still around, I think), and I gather that people who need well-performing audio for Serious Work (particularly anything latency-sensitive, or requiring that multiple streams be mixed at high quality without errors) and have decided to use Linux for it end up having to replace or bypass PulseAudio one way or another (this info may be out of date, but given PA's history, I kinda doubt it).
I think the Pulse situation is very similar to the one with systemd (they're even by the same author, IIRC). There are some haters, but if you're a hater-hater then you can just ignore them (e.g. GNOME, IIUC, depends on both Pulse and systemd, and haters of those dependencies who want the DE just have to find their own shims, like Gentoo's, deal with some error messages (random games and things often give me a lot of audio-related ones, but generally work OK anyway), or just leave for another DE or WM).
Oddly enough, my favorite package management experiences, by far, have been Homebrew on Mac and Portage on Gentoo. Talk about polar opposites.
I do think, unfortunately, that the experience of a very stable base of macOS with Homebrew on top would be nearly impossible to replicate on desktop Linux, because Linux's GUI layer is so intertwined with user software, and is so... uh, "free and libre", I guess, is a nice way to put it. You'd end up with a bunch of copies of KDE and Gnome libs and probably multiple competing IPC buses or god knows what, in no time. Maybe multiple sound daemons stepping on each other. You'd probably have issues like different apps deciding to do scaling or font rendering differently, or who knows what, because lots of "basic features" on macOS are instead choices on Linux.
> A mounted FS per installed application so you can run different versions side by side.
Does it strike anyone else as very telling that people looked at the problem of "we can't install more than one version of an application at a time because of how we structure our filesystem and hardcode paths" and decided that the solution was "let's give each application its own filesystem!"?
ISTM that anything which lets you install multiple versions side by side will require each version to have pinned versions of dependencies, transitively, until it's very close to the dominant node, which ultimately is the OS API, which in turn, done right, is backward compatible.
You can implement that versioned dependency DAG as a database of versioned libraries etc. combined with a version pinning database; or you can simply use disk space, and keep a copy of every dependency and put each in its own subtree, making the version pinning database implicit in the tree construction.
I don't see the latter solution as being intrinsically worse than the former. I don't think there's a real alternative either; the most reliable set of dependencies is the same set as the original binary was built and tested with.
(And of course it's not just libraries, but also configs of various flavours, assets, etc. If you didn't use the subtree approach, you'd need to reimplement it in a shim layer to simulate it, in order to package yesterday's applications today.)
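To make the disk-space variant concrete, a layout along these lines (paths invented purely for illustration) is the whole "pinning database":

    /apps/foo/1.4/bin/foo            # yesterday's build, with its own pinned deps
    /apps/foo/1.4/lib/libbar.so.2
    /apps/foo/2.0/bin/foo            # today's build, carrying its own copies
    /apps/foo/2.0/lib/libbar.so.3
    /apps/foo/2.0/share/foo/assets/
    # Whatever sits under a version's subtree is exactly what that version
    # was built and tested against; the pinning is implicit in the tree.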
For the record, Flatpak solves the same issue without mountpoint spam by using OSTree to store flatpaks and their shared runtimes. Not only that, it gets full deduplication of everything and diff downloads just by using OSTree.
AFAIK Snap has no deduplication of installed stuff and not even any shared runtimes. Not sure if it has diff downloads.
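You can see the shared-runtime side of that from the CLI; commands from memory, so double-check against the flatpak man page:

    flatpak list --runtime        # runtimes installed once, shared by apps
    flatpak list --app            # apps referencing those runtimes
    du -sh /var/lib/flatpak/repo  # the OSTree repo holding the deduplicated objects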
I know, what's up with this? I always feel like the systems guys must know something I don't, and that it would be way harder than I'm imagining. But like.. haven't we learned enough in the last 30 years to know that we need something smarter for mutually dependent pieces of software? Why is nobody trying to do this?
I get that Docker, Snap, Flatpak, etc. are expedient in the here and now. But shouldn't we be trying to rebuild everything on top of nix, and then make nix completely transparent to most users?
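To be fair, the per-user, versioned piece of that already exists today; a hedged example with classic nix-env (the package attribute is just an example):

    # Installs into the user's profile under /nix/store; no root needed,
    # and old profile generations stick around for rollback.
    nix-env -iA nixpkgs.ripgrep
    nix-env --rollback            # go back to the previous generation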
From my understanding, the new packaging is designed to move control from the distribution maintainer to the developer. So depending on the software and the distribution you get a range of advantages and disadvantages. You can continue using apt, snap, Flatpak, Steam, and binaries from tar.gz, so we only gained in choices.
Also, moving control from the user to the developer. Snaps automatically update, and until recently there was no way to disable this behavior. This is a poor security choice, and it means a stable system cannot be made.
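For what it's worth, newer snapd versions did eventually grow some controls; the syntax below is from memory, so verify against the snap docs before relying on it:

    sudo snap refresh --hold                           # hold automatic refreshes indefinitely
    sudo snap set system refresh.timer=fri,23:00-01:00 # or just constrain when they may run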
While apt can still be used, many packages were replaced by snaps, completely defeating the purpose.
Most notably the Calculator app: an app which started in less than a second on Ubuntu 16.04 and now starts in about 4 seconds on Ubuntu 18.04, all so the developer can include copies of the system library dependencies that already exist in the main filesystem. An app that is notable for providing the user with the most powerful capability of performing arithmetic operations on a modern computer.
Snaps have done what countless desktop environments could never do: make it faster to do math with Google than with a local calculator application.
> (N.B. Various snaps for ripgrep on Ubuntu are also available, but none of them seem to work right and generate a number of very strange bug reports that I don't know how to fix and don't have the time to fix. Therefore, it is no longer a recommended installation option.)
I distinctly recall 'command-not-found' suggesting that I install rg from a snap in some version of Ubuntu, but I don't remember which version it was.
Just a hypothesis: maybe some Rust apps that can't be compiled with the toolchain in Debian/Ubuntu have no other choice than to use snap/Flatpak (so maybe at that time, on an LTS distro, apt would not have worked).
This isn't it. They just package a version of ripgrep that does compile with their toolchain. ripgrep has been around for almost four years now, and I think the first release was around Rust 1.9 or so. So even Debian will have a new enough version to compile some version of ripgrep.
I used to include installation instructions for the Ubuntu snap, but they were so mismanaged (both from the perspective of the maintainer of the snap and of the entire snap ecosystem itself) that they were causing giant headaches. It became such a problem that I specifically put a call-out to it in my bug report template: https://github.com/BurntSushi/ripgrep/blob/master/.github/IS...
Snap has been hugely annoying on multiple axes since day 1.
So you claim the package was in Debian and Canonical removed it to replace it with a snap? AFAIK, if you want to push your stuff into Ubuntu, you are always strongly encouraged by Ubuntu to put your stuff in Debian.
>While apt can still be used, many packages were replaced by snaps, completely defeating the purpose.
From my understanding there are not many snaps installed by default, and the ones installed by default are Canonical's, not some Solitaire game you found on a piracy website.
The issue with deb is that you need to install the package as root. For example, I needed a program to inspect a PDF; I found "PDF Master" and it had a .deb installer, but should I trust this installer? Because I understand how deb works, I downloaded the file, unpacked it, and ran the application without installing it. We need something that allows us Linux users to run random binaries without giving them root, and it would be cool if we had easy-to-use sandboxing and firewalls to prevent these apps from connecting to who knows what.
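For anyone wondering, the unpack-without-installing trick is just this; the file and binary names are made up for the example, since a .deb's layout varies by package:

    dpkg-deb -x pdf-master.deb ./extracted   # unpack the payload; no root, nothing registered with dpkg
    ./extracted/usr/bin/pdfmaster            # run it in place (actual path depends on the package)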
They bundle all of their dependencies inside the snap, so even if a project the snap depends on makes breaking changes and you update your system with those changes, the snap will still work. Snaps also update on the fly without user interaction, which some people like and other people hate.
Personally, I think Flatpak makes more sense than Snap. Snap's all tied up with Canonical.
I don't think anyone is upset about the ability of a snap app to update without user intervention. It's the lack of choice. You're forced to take updates.
Agreed.
I wasn't saying the 'ability' to update without interaction was something some people like and some people hate. I said they 'do' update without user interaction, etc etc.
I think we're on the same page. Thanks for articulating that bit of nuance.
Provide a recent version of Arduino on debian-based systems ;)
They're mainly useful for applications which either release too fast or too slow compared to regular distributions. The apps contain their own dependencies.
Another advantage is for messy applications. I'd much rather run Wine applications inside of a sandboxed snap than pollute my base system with all the multilib stuff that's required for wine.
Main benefit is the sandboxed app environment. Snaps are also bundled with their dependencies, so you don't need globally installed dependencies to run the app like you do with apt.
snaps are self-contained apps that ship with their own dependencies, so you get software directly from the developer and you'll be on whatever version they ship.
software installed with apt on Debian is fixed until the next release (unless you're on testing/unstable), so that would be the biggest difference.
Yeah, basically, although 'more complicated' depends on what you consider to be complicated. Another benefit is that you can be sure they run on any distro that runs snapd, and you don't actually end up replicating all dependencies in every binary, because snapd doesn't duplicate the ones already installed.
Another big difference is that the sandbox of Snap is much more granular. Snap uses AppArmor to mediate the _existing_ Linux APIs. This allows you to give an app access to USB-serial devices, or to joysticks and controllers, while denying it access to a USB webcam, for example.
Both Snap and Flatpak support XDG Desktop Portals to give desktop applications secure access to features such as a webcam, but this is a new API; applications need to be ported to it before they can use these features.
As a result, most snaps have far fewer permissions than their Flatpak counterparts, because the Flatpak sandbox needs to be largely disabled for the applications to work.
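The mechanics look roughly like this; the snap name is only illustrative, and the interface names are the ones I remember existing, so treat this as a sketch:

    snap connections myapp                 # show the snap's plugs/slots and what's connected
    sudo snap connect myapp:serial-port    # grant USB-serial access
    sudo snap disconnect myapp:camera      # make sure the webcam stays off-limits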
Flatpak cannot currently run on servers, only desktops. I'm not sure if there are plans to change this; if anyone knows, I'd be interested in finding out.
You seem to be suggesting that servers don't run D-Bus. It is currently running on all my servers, and I'm pretty sure it is standard on all major server-oriented distros. Am I mistaken, or did I misunderstand your point?
My memory is foggy, but there was something about dbus which rendered it unsuitable for systems without a running GUI on X/Wayland. Not sure what though, it's been years.
There's nothing about D-Bus in general that is problematic, but rather that flatpak relies on the D-Bus session bus and systemd --user instances[0]. It can be made to work just fine on a server, but it may not be the most convenient arrangement due to this dependency on user sessions.
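If you really want it on a headless box, you can fake up the session pieces yourself, though it's clunky; a hedged sketch, with the user and app ID as placeholders:

    sudo loginctl enable-linger myuser                 # keep a systemd --user instance alive for that user
    dbus-run-session -- flatpak run org.example.App    # give flatpak a session bus to talk to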
BTW, OSTree is also the technology behind Fedora Silverblue, Fedora IoT, and modern versions of CoreOS. Flatpaks also use OSTree internally for deduplication of all runtimes and flatpaks and for diff-only downloads.
Actually, even for desktop Snaps have some advantages.
First of all, the fact that you can put a system service in a snap: The OpenPrinting project is working on creating a snap which contains the entire Linux printing stack. This is so users of older distribution releases can still use the latest printers.
Secondly, the Snap sandbox is a lot more granular than the Flatpak sandbox: Flatpak went for portability and Snap went for granularity. Snap uses AppArmor to mediate existing Linux kernel APIs. As an example, you can give a snap access to joysticks and controllers but deny it access to a USB webcam.
XDG Desktop Portals should make it possible to safely use certain host features, such as a webcam, in a Flatpak, but this requires applications to be rewritten to use the portals API. In the current state, most Flatpaks disable large parts of the sandbox because the applications would otherwise not be able to run in it.
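On the system-service point: snapd drives those daemons through its normal CLI, roughly like this. The openprinting.cupsd name is invented for the example; the real snap may be named differently:

    snap services                               # list daemons shipped inside installed snaps
    sudo snap stop --disable openprinting.cupsd # stop it and keep it from starting at boot
    sudo snap start openprinting.cupsd
    sudo snap logs openprinting.cupsd           # journald-backed logs for that service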
You might find Fedora Silverblue fascinating. It's an OS that uses Flatpak exclusively for desktop apps, with disposable "toolboxes" for non-Flatpak apps.
And all the system stuff is immutable and managed by OSTree. This makes it possible to download only a diff of the new system state and then atomically switch to it after a reboot. And if you hit any issues with it, just boot the previous one (you can keep multiple previous states).
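The day-to-day interface for that is rpm-ostree, roughly along these lines (from memory, so check the Silverblue docs for specifics):

    rpm-ostree upgrade      # download the diff to the new system state and stage it
    systemctl reboot        # atomically switch into it on the next boot
    rpm-ostree rollback     # if it misbehaves, boot the previous deployment instead
    rpm-ostree status       # list the deployments currently kept around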
The fault isn't SquashFS itself but the compression they use. They use an old compression algorithm by default to support older kernels. They are working on optionally enabling quicker compression on distros that support it though.
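If you're curious what a snap on your disk actually uses, unsquashfs can print the superblock; the firefox path is just an example of a snap you might happen to have:

    unsquashfs -s /var/lib/snapd/snaps/firefox_*.snap | grep -i compression
    # Historically this reported xz; the faster option being rolled out was lzo.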
> Snap is designed so each device only connects to a single store for three reasons:
> users can easily discover new applications,
> developers can easily publish their apps,
> and developing Snap itself is easier.
Just PR bullshit to cover for the lack of freedom and the need for control. Disregard.
The better question is why there is a snap store at all. With Flatpak and AppImage already working well, why muddy the waters with another offering, much less one that is awful?
God, I could not hate Snap more. At least a .snap file can be mounted as a SquashFS, so you can salvage what you need to run the program without snapd. Unrelated, but it is curious that some applications I have encountered depended on some proprietary library. To the bin it went. :D
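The salvage trick, for anyone who wants it (the file names and mount point are illustrative):

    sudo mount -t squashfs -o ro,loop ./some-app.snap /mnt
    ls /mnt                                  # the app's whole tree, no snapd involved
    # or unpack it outright instead of mounting:
    unsquashfs -d ./some-app ./some-app.snap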
It does not seem that the author is affiliated with Canonical. Ideally, Canonical should be the ones defending their product.
IMO, it is okay for the backend to be proprietary as long as there is a published, easy-to-implement protocol for talking to snapd (along with an accessible way for clients to configure snapd). Otherwise, there is lock-in to the service. It's not just the FOSS community that should be concerned, but business users as well. Do you want to internally distribute in-house snaps via Canonical's servers?
> DockerHub and GitHub are insanely popular and they are completely proprietary.
These are arguments against, not for, using these services. And many proprietary SaaS companies sell installations on on-prem air gapped servers.
Nix is really cool! You can actually use Nix to build Snap packages! Not sure if that's actually used a lot though.
One of the advantages of Snap over Nix is that Snap has security built in. Snap is built to support a store where anyone can publish any application without curation. Nix Flakes look really cool for third parties to "publish" applications for Nix, but I wonder how they will handle security.
In Snap, the default permissions of an app are stored and distributed out-of-band with the package itself. The package defines what permissions it _can_ use and the declaration defines what permissions it is _allowed_ to use by default. Users can then override the default permissions.
You could get the same level of confinement in a Nix package, but then the package contains both the app and the confinement, so the packager or software provider gets to decide what permissions a package has on your system. Any idea how Nix could address this?
The building of nix packages is completely sandboxed and secure. The running of them is currently an after-thought.
- https://spectrum-os.org/ is a big idea for secure running of applications, which leverages Nix for some things.
- I would like to see some CloudABI/Capsicum experiments, as anything that is not capability-based seems baroque, difficult, ill-fitting to the problem at hand, and generally going to end in tears.
- There is currently some wrapping around systemd-nspawn which could be improved.
BTW when I say "The running of them is currently an after-thought" this isn't as bad as it sounds. I would say all the good solutions are not the scope of Nix, but instead the scope of NixOS, Nix home-manager, nix-darwin, etc. Still, a real problem that deserves a solution. Thanks for bringing it up.
People will be angry with your bluntness, but you are absolutely, irrefutably, correct.
Ubuntu is so much more stable and capable than any other distro that it's ridiculous. If Micro$oft or "Crapple" gave users the level of reliability that other distros do, every single user on HN (EVERY SINGLE ONE) would be frothing at the mouth criticizing them.
Hell, until F32, Fedora would not boot to a usable system after installation without GRUB and config file tweaks on my workstation (TR1950X+GTX1080+dual 4k monitors).
This is also a reason more companies don't go into open source or target the GNU/Linux desktop. Whenever you do something that slightly disagrees with the Richard Stallman philosophy, or god forbid charge for software, they will froth at the mouth, attack you, and say stuff like "I will fork you and make your company obsolete". Nobody who sells software on the Mac App Store, for example, ever has to hear stuff like this.