Debian is like democracy: the worst way of producing an OS, except all the others that have been tried from time to time.
BeOS, AmigaOS, Solaris, most other 80s OSes - they’re effectively dead. Windows and macOS have effectively died once already. The BSDs can stall for years at times. Most Linux distributions (including RedHat) are typically only as good as the fortunes of the commercial (or occasionally public) entity behind them. In all this, Debian endures, with its slow but inexorable progress, simply because its ideological foundations - not its technical ones - are eminently superior to all the others. Debian contributors don’t do it for the money, so they will be there when the money runs out; and they don’t do it to be cool either, so they will be there when OS work is not cool. People will come and go, but the ideal of the “democratic OS” will always be there - hence, Debian will be too.
> Debian is like democracy: the worst way of producing an OS, except all the others that have been tried from time to time.
How many of us would be happy working at a software company with a bug tracker from the 90s, artifact management done with FTP, little to no tooling to manage large changes and do code review, no standards around source control, etc.? Those are symptoms of a software development culture stuck in the past.
It would be pretty frustrating to go home from my day job, where we have a much better development workflow, and try to make a contribution to the Debian project using Debian's tools and processes. And I'm sure I'm not the only software engineer who would feel that way. That can become a real problem for the future of Debian if it's not addressed.
I can't speak for a company, but in Debian source code control seems to be settling on https://salsa.debian.org. For upstream code it is pretty flexible and expectations are low: a tarball is enough, and it also works with git (branches and/or tags), cvs, hg, etc.
There is no need for complexity just to transfer files; that is why the File Transfer Protocol works well. If you are referring to public FTP services, those were deprecated three years ago. See https://www.debian.org/News/2017/20170425
Funnily enough, I feel the exact opposite. I see Debian as stable, while most other development these days is like web design: learn a tool or framework and start using it - but wait! There's a better one out now, and hey, look at this shiny new tool. I like stable.
> How many of us would be happy working at a software company with a bug tracker from the 90s
How many of us would be happy deciding things in groups with processes defined 100 or 200 years ago? But that's what representative democracy effectively does, every day, in most of the West. People just find ways to cope and move on, since the process is just a means to an end.
> That can become a real problem for the future of Debian if it's not addressed.
Yes and no. Yes, processes should be improved all the time. No, it's not a real problem in the long run - I've been hearing more or less the same story basically since Debian started, but it's still arguably the biggest and most relevant Linux distribution in existence. People come and go, the ideal endures.
And on top of that you also have to involve yourself in endless political discussions about minor topics while dealing with "complicated" maintainers.
It's ironic this is the top comment, since the vibe I got from the post was that people waste way too much time discussing ideological things and not fixing any of the actual problems he encountered as a maintainer.
The thing that's wrong with the Debian project is that the ideological stuff no longer attracts talented engineers who are interested in working for free on something dry like package management.
In the last two months there have been 6 new Debian Developers and 10 new Debian Maintainers. This information is available at https://bits.debian.org
For me, working on packaging is a joy. Once I learned how it works, I came to see it as a thin wrapper around an upstream code base that, after being built with whatever upstream tooling, is copied into a package alongside its dependency information.
I got a different vibe from the association with a Churchill quote[1]
I moved from Ubuntu to Debian a few years back; if I need anything beyond the ordinary I can always set it up manually, and I am pretty content with that.
Debian is well known to not be a democracy. It is a do-ocracy. Votes are very rare and, except for systemd (2 in 10 years), are quite non-technical. The second key aspect is that you can't force anyone to do anything. The third key aspect is that the project is unable to reach any consensus (there is always someone who disagrees and makes enough noise for the discussion to go nowhere, and we cannot vote).
Most of the points listed by Michael derive from that. It's impossible to change something. We can only do small things for which no cooperation is needed.
> It's impossible to change something. We can only do small things for which no cooperation is needed.
And yet the migration to systemd, which required lots of changes in disparate packages, happened. And the migration away from python2 will happen too, albeit perhaps not as fast as the people driving it would like. And the new source format happened. And for repeatable builds - Debian leads the world.
Methinks "we can only do small things for which no cooperation is needed" might be overstating the case a smidgen. Lots of things that aren't small happen in Debian on a regular basis.
The migration to systemd was painful, and it needed two GRs and a lot of drama to move forward. We had to wait for debhelper 10 (2016) for it not to be a hack in packaging.
Migration away from python2 is wanted by doko, the Python maintainer. If he didn't want that, nothing would move. We were stuck for a long time with Python 2.6 because he didn't want to migrate to Python 2.7. As he is also maintainer of gcc and Java, nobody wanted to vote him out.
I may have missed the headlines around the new source format. Ack for repeatable builds.
What about bikesheds/PPAs? There were many discussions a few years back, but it was mostly blocked because the FTP masters want it integrated into DAK and subject to various other non-technical constraints.
> Debian contributors don’t do it for the money, so they will be there when money runs out; and they don’t do it for being cool either, so they will be there when OS work is not cool.
This is a great sentence, probably one of the most important (and underrated) ideas in FOSS and engineering more generally. A lot of critical work is not lucrative or glamorous - does your project recognize and support the people who do that work?
Windows NT and Mac OS X are both scratch rewrites. Though the author of the comment may be referring to the fact that Android and iOS dominate the space now.
Classic Mac OS is about as dead as software can get: it's no longer developed or developed for, there's no backwards compatibility in its successors, and they don't even make hardware that can run it anymore.
Fedora is the experimentation lab for Red Hat. Test in Fedora, include in Red Hat Enterprise Linux, retire to CentOS. That's similar to Debian's testing, stable, oldstable model, just with a different flavor.
Fedora will be around as long as Red Hat is, since it exists primarily as the RHEL unstable branch. Considering RH is now part of IBM, that might well be forever, but still, it’s largely about commercial involvement from a given company, as with Ubuntu and Canonical, etc.
While I like and use (and even recommend) Archlinux, Debian’s track record is absolutely venerable. It’s a huge accomplishment to carry so many people and an ecosystem along with you over decades, doing all the unsexy tasks (the number of packages!) and serving as a stable platform in support of user freedom, on top of which others can build nimbler and sexier offerings. The Debian project deserves our utmost respect for its effectiveness in organizing a community around a goal.
I'm a huge fan of Arch Linux and would say that it is the best for personal use.
However, I wouldn't dream of running it on a fleet of several machines. I currently run 6 nspawn containers, and it's harder to be confident an update won't break them.
Debian is great if you are running many servers. Its slow rate of change comes from taking care not to break the world.
I like Debian as a community, but I think it would benefit from decentralization to speed up development.
The community is great, but the current package management techniques and processes are the equivalent of SVN, with modern approaches like Nix or Guix being the equivalent of Git. In Debian, the whole tree of packages has to be in sync. That works well for Arch, as it is a rolling release, but IMHO it slows down Debian, as it doesn't use their manpower efficiently.
A long time ago, when Nix was not popular, there was a discussion on debian-devel about adopting Nix. It was probably premature. This discussion has resurfaced a number of times. I think they would currently benefit enormously from Nix, or from rolling their own tooling that implements equivalent ideas.
With such a big community and large package set, packages should be decoupled from each other so that they can depend on different library versions and move at their own pace. Also, Nix-like tooling would make it possible to automate and test most package updates when upstream changes, or to find common vulnerabilities and exposures (CVEs) automatically. Currently, a lot of manual intervention is needed to do this.
This would also be advantageous for end users, as they could mix and match packages from different channels. PPAs are an inferior solution.
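To make the decoupling concrete, here's a rough illustration of the model (assuming a working Nix install; package names are just examples): every build output lives under a hash-addressed path, so nothing forces the whole tree to move in lockstep.

    # Several versions of the same library can sit in the store at once:
    ls /nix/store | grep -- '-openssl-'
    # And you can ask exactly which store paths a given package's output
    # references, i.e. its precise runtime dependencies:
    nix-store --query --references "$(nix-build '<nixpkgs>' -A hello --no-out-link)"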
These are great points. You might consider bringing it up with Debian developers again.
Despite some of the difficulties the author mentioned in the article, Debian has successfully spearheaded some ambitious project-wide initiatives, like reproducible builds. So I don't think it's out of the question that they could vastly improve the packaging experience for both users and developers with something like Nix or Guix.
Of course the biggest question is: how does one get there from here? For example-- can the Nix packaging approach coexist and play nice with the current Debian packaging system for years to come?
> can the Nix packaging approach coexist and play nice with the current Debian packaging system for years to come?
Yes. Nix, or an equivalent implementation like Guix, stores all packages in a separate tree (e.g. /nix). Nix can in fact be used outside NixOS; it's quite popular on some distros and on macOS.
Hence, rolling out Nix or an equivalent tool can be done smoothly. Both can co-exist nicely.
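As a sketch of how low-friction that co-existence is (commands per the nixos.org install docs; treat the details as approximate):

    # Single-user install; everything goes under /nix, nothing touches dpkg/apt.
    sh <(curl -L https://nixos.org/nix/install)
    . ~/.nix-profile/etc/profile.d/nix.sh      # put the nix tools on PATH
    nix-env -iA nixpkgs.hello                  # install a package from nixpkgs
    readlink -f "$(command -v hello)"          # resolves to a /nix/store/... path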
It really sounds quite workable as a solution, then. IIUC anyone in Debian could start work on this at any time, with really no disruption of the current system.
Of course the devil is in the details-- graphics drivers, bootstrapping, etc.
Debian is one of the only distributions releasing images for 32-bit x86, old PPC, and other less popular architectures[1]. Arch just targets 64-bit x86.
At this point there are no porters for i386 and it no longer has the porter waiver from the release team, so it is likely that Debian bullseye will not support 32-bit x86. Old PPC and many other architectures were dropped from releases many years ago. There have been no replies to the roll call for porters yet, so it looks like Debian bullseye will just be amd64 too, unless people are replying privately for some reason.
Arch isn’t x64 only. Manjaro’s an Arch derivative, and is the preloaded OS on Pinebook Pros (they used to preload Debian).
The switch is disappointing for me, since I’d prefer Debian with a minimalist WM. However, Manjaro + KDE is good enough for light usage, and definitely easier for more mainstream users.
I'll start off with a hyperbole: We don't have any of that.
But that isn't really true. Arch historically has always been a DIY distribution with an equally DIY contribution structure. Our leaders have been BDFLs for close to two decades, until the process was formalized and we held our first project leader election this year.
There are probably a lot of bad things about a less formalized process, but it allows Arch to move fairly rapidly and decide things without a lot of internal politics.
Let me make a prediction: if Arch survives as long as Debian has and gets the same number of contributors as Debian, by the end its internal organisational structures will look a lot like Debian’s. At the moment it looks like it’s where Debian was about 20 years ago.
For personal use, maybe.
For fleet usage, production, set'n'forget servers and any critical role, no.
I can provision 100+ Debian servers in any configuration I want in under 15 minutes by utilizing the features of the OS itself, and forget about them after setting them up.
We actually lost one Debian server in a system room (in a rack of an unlabeled cluster of identical servers), and it was working flawlessly when we found it again months later.
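For the curious, one possible shape of that kind of provisioning (the parent doesn't say which mechanism they use; preseed, FAI, and debootstrap are the stock Debian options, so take this as a hedged sketch):

    # Bootstrap a minimal Debian system straight into a mounted target disk:
    debootstrap --arch=amd64 buster /mnt/target http://deb.debian.org/debian
    # Then layer on whatever the role needs; configuration is plain files
    # under /etc, which is easy to template and push to a hundred machines.
    chroot /mnt/target apt-get install -y --no-install-recommends openssh-server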
I love the idea of Arch Linux and it probably is worth everyone who's really interested in Linux trying it at least once. But it's also the only distro I have used for probably more than 10 years where I found myself having to edit my X config to try and get something to work. At that point I just backed away from the keyboard slowly and realised it wasn't worth my time.
I do still have a throwaway cheap VPS with Arch, but even then I can't recommend it because the security story is largely non-existent.
Up-to-dateness, no unnecessary distro-specific patching.
EDIT: My comment was ambiguous, I didn't mean that there are no Archlinux-specific patches; rather that there's more of an effort with Archlinux to let upstream be upstream.
But putting that aside, all distros need huge amounts of patching to make each package get along with the rest of the system. Without patching, many of them won't even build in the first place.
There are duplicate PKGBUILD files in the repository, so depending on the PKGBUILD and where it is, there might be 3 results for every 1 patch. In many cases there are two hits per patch.
I've read this at least a dozen times, mostly on HN, and mostly by Archlinux advocates. Many people seem to ignore that Debian testing and Debian unstable are continuously updated (rolling releases). Please stop propagating false claims that taint the Archlinux community's reputation.
Summary for people who like neither pictures nor tables:
* Debian Unstable (31k) has way more packages than Arch (9k without AUR), but the AUR (57k) has way more packages than Debian.
* The total number of packages that are at the latest upstream version are about equal for Debian (17k) and AUR (15k). Arch (without AUR) has way less total updated packages (7k).
* Arch has about the highest percentage of fully updated packages (85%), Debian is lower (72%), and the AUR is even lower (69%).
* NixOS rivals the AUR in number of total packages (53k), has a big margin in total latest upstream versions over everything else (24k, thus 30% more than Debian or Arch), but does not have as high an update percentage (79%) as Arch.
The numbers are not perfect because of split packages and alternative packages (e.g. the AUR often has additional `-git` variants), but they give a rough idea.
I used to run Debian testing, and for a short time sid, before switching to Arch, and I had to reinstall every 6 months at worst to every two years at best because of packages breaking, the system becoming unbootable after updates, etc...
In contrast, Arch has been both up-to-date and rock-solid - my current install has been carried over through three PCs since 2015.
I used to run Arch (2011-2014ish) on a personal server. I'd generally go a few months without updates, and large batches of updates were often painful, requiring manual steps... like the move to systemd, merging /bin and /usr/bin, and others I've forgotten.
I have also had update issues with Ubuntu. There was a bug with Ubuntu 20.04 where a server would lose its default route when it had multiple network interfaces. And another bug where, after an update, network interfaces were renamed on a reboot rendering the server inaccessible. Is having a server with more than one network interface that unusual?
I have yet to find a distribution where updates are not problematic.
NixOS is designed so that updates won’t break the system in non-reversible ways. If an update didn’t work out well for you, you can always roll back to the previous version and withhold the update until you’re ready. I’ve used NixOS as a daily driver myself for years, and haven't needed a reinstall even once.
Never been brave enough to run Arch, but I've had Manjaro in a VM as a torrent/vpn/media server and it's been rock solid for like 4 years. I use ubuntu LTS for most things but I can't complain about Arch/Manjaro stability.
That's not how it works. Debian maintainers maintain packages from the very beginning of the process. They won't just wait until a package has entered stable.
Moreover, when comparing different distributions, it would make more sense to have a closer look at the release process rather than compare how they label their packages. Since Debian tests its packages for a longer period of time than Arch, Debian testing should be just as stable as Arch stable.
I think we're using the word "maintains" differently. Packages in Sid have no guarantees that they'll work, no security team, and no support system if you get stuck. Sid isn't meant to be used as a daily driver, and if your computer stops working that will be expected in Sid but a gigantic bug in Arch.
> Debian testing should be just as stable as Arch stable
Sure, but how up-to-date is Debian testing when compared to Arch?
> Packages in Sid have no guarantees that they'll work
Guarantee is a strong word. Can Arch guarantee this? Occasional breakage is bound to happen with bleeding-edge rolling releases.
> no security team
Weaker guarantees than stable, but that doesn't mean Debian doesn't handle security issues in unstable or testing. It'll be too late if they start dealing with security issues once a package enters stable.
That shouldn't matter much for people who're willing to use Arch as a daily driver.
> if your computer stops working that will be expected in Sid but a gigantic bug in Arch
A gigantic bug but still happens nonetheless.
> Sure, but how up-to-date is Debian testing when compared to Arch?
According to repology, Debian testing has twice as many up-to-date packages as Arch official [1]. Considering that packages of higher importance tend to be more actively maintained, I'd assume that Debian won't be significantly behind the latest release for packages that exist in both Arch official and Debian.
I've run it on all my desktop and laptop computers for 20 years and it's fine. However, the only package I upgrade automatically is Chrome. I do a full upgrade once or twice a year, and in the meantime I only upgrade packages as needed. The whole point of versioned dependencies is that you don't have to adhere to one particular snapshot.
I'm a fan of both operating systems, but I have had a much more pleasant experience having updated versions of packages by default in Arch than by heading over to testing or unstable on Debian - in other words, the newer packages on Arch felt far more robust than the unstable packages on Debian. This leaves aside the fact that stable Debian was far more stable than Arch for me.
By Arch standards Debian “unstable” or “testing” would be branded “stable.” If you can choose Arch stable for normal use, then you can do the same for Debian unstable too.
Words can mean different things in different contexts - “stable” and “unstable” in Debian refer to whether or not the major version numbers of included packages are going to change, not to how buggy they are.
> If security or stability are at all important for you: install stable. period. This is the most preferred way.
> If you are a new user installing to a desktop machine, start with stable. Some of the software is quite old, but it's the least buggy environment to work in.
> Testing has more up-to-date software than Stable, and it breaks less often than Unstable. But when it breaks, it might take a long time for things to get rectified. Sometimes this could be days and it could be months at times. It also does not have permanent security support.
> Unstable has the latest software and changes a lot. Consequently, it can break at any point. However, fixes get rectified in many occasions in a couple of days [...]
Superior documentation and AUR. You can find almost anything in AUR, including all the proprietary crap, and install it all with one command. It's also very easy to write a PKGBUILD and upload it to AUR if you don't find what you need, because the package format is so much simpler.
Here are two specific examples which other distros might struggle with:
repackaging a tarball to a proper system package which is tracked by the package manager
> Superior documentation and AUR. You can find almost anything in AUR, including all the proprietary crap, and install it all with one command. It's also very easy to write a PKGBUILD and upload it to AUR if you don't find what you need, because the package format is so much simpler.
I'm a Debian fan, but these two points are very true. Arch documentation is great, and writing PKGBUILD files is easier than packaging for distribution via Apt. I don't even use Arch, but I still release for it because it's easy.
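To give a feel for that, here's a minimal, hypothetical PKGBUILD (the name, URL, and tarball are placeholders, not a real package); `makepkg -si` builds and installs it:

    pkgname=hello-example
    pkgver=1.0
    pkgrel=1
    pkgdesc="Toy example package"
    arch=('x86_64')
    url="https://example.com/hello-example"
    license=('MIT')
    source=("$url/$pkgname-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "$pkgname-$pkgver"
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }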
Well, in my opinion the Arch Linux package manager is fast enough, but that might be influenced by the years I spent using Gentoo (Portage & Paludis), which aren't even part of his benchmark...
For those who are unaware, Portage and Paludis are both orders of magnitude slower than the other package managers. However, the comparison is not completely fair, as they both were source-based package managers in the beginning, which have to work a bit differently. Nevertheless, Paludis is still a lot slower when used with binary packages, as it doesn't take all the shortcuts the others take.
I have yet to see Michael discuss Nix without spreading inaccurate, flatly wrong information about it (Nix does not have post-install hooks, and `nix-env -i` being slow and unrecommended is a known meme that is addressed in upcoming tooling).
Nix fulfills every single requirement Michael has put forward (except the squash>tar thing, which I still don't understand).
That's all there is to this. I sympathize with Ericson's frustration. It's exhausting watching people re-invent inferior solutions to Nix, instead of just hopping in and fixing or using Nix. Of course, John Ericson is one of the few people motivated, qualified (and maybe has the buy-in) to make changes in Nix. I'm thankful for that on-going work.
It evaluates the build plan for everything we have, and then searches the things that were evaluated.
The new unstable CLI has an evaluation cache at least.
What most people do today is look for keys in the object and just evaluate what they need. (The Nix language is lazily evaluated so you can explore like this pretty well out of the box in the repl.)
Or, they just grep Nixpkgs :D.
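For instance, both approaches look roughly like this (the checkout path is just an example):

    nix repl '<nixpkgs>'          # loads the package set lazily; then evaluate
                                  # only what you ask for, e.g. `ripgrep.version`
    grep -ril ripgrep ~/src/nixpkgs/pkgs | head   # or just grep a nixpkgs checkout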
-------
All of this, problems and solutions alike, is a weird situation to be in. I still stand by "just get rid of nix-env -i", but I want there to be better solutions too.
Nixpkgs seems overly complex, and is in some ways, but the fact that it's trying to herd a gazillion upstream packages that don't meaningfully coordinate makes this harder to fix than it should be.
$ time nix-env -qa > nixdb.txt
________________________________________________________
Executed in 470.40 secs fish external
usr time 25.11 secs 35.00 millis 25.07 secs
sys time 10.05 secs 27.00 millis 10.02 secs
$ ls -s nixdb.txt
736 nixdb.txt
A plain-text database of less than 1 MB, which took less than 10 minutes to generate. It is going to remain useful for my use-case ("see if a package is available in Nixpkgs without going to packages.nixos.org"). Now I can just use grep or rg.
Sure, Nix or Nixpkgs could ship such a database (or a better one) natively, but I don't see a problem with the above. Maybe someone cares to explain?
> The performance improvements distri provides, definitely some of them can apply to Nix. I think there are some low hanging fruit in Nix.
The other issues do reflect some persistent rhetorical issues we've had with explaining Nix:
> There was little differentiation with Nix vs NixOS. (Some of his philosophical difference could be resolved by just using Nix.)
> There was no recognition that the "nix language part of Nix" is cleanly layered away from the layers that actually do the work of running jobs and moving files around, and can be replaced like Guix does.
So I do want our materials to highlight this so experimenters realize Nix is less of an all-in proposition than it sounds (if one is already willing to do the extra work of blazing their own trail).
A few blog posts on "how to make your own Guix!" would be really neat. I'm not as deep as I'd like to be, but roughly it's simply a matter of outputting a derivation that `nix-daemon/nix-build` can handle, right?
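(For concreteness, something like this toy derivation is about the smallest thing nix-build will accept - at least as far as I understand it:)

    cat > default.nix <<'EOF'
    derivation {
      name = "tiny-example";
      system = builtins.currentSystem;
      builder = "/bin/sh";
      args = [ "-c" "echo hello > $out" ];
    }
    EOF
    nix-build default.nix    # realises the derivation into /nix/store/...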
I do want to add a section to the Nix manual that describes what it is (data model, key abstractions, etc.). Everything today is some variation on "how to use it".
What are you mad at? You seemed frustrated he was unaware, and now you find that he was aware, you’re mad that the information on his state of awareness was difficult for you to find? Why does it matter to you? Or are you disappointed that his assessment does not match your opinion of Nix et al.?
I'm disappointed that the overhead of trying new things here is higher than I think it needs to be, because making these things from scratch means doing lots of non-innovative work between the researchy bits.
I care because there are very few people who care this much about packaging, and if our efforts were less divided it would go a lot further.
I'm a little disappointed by what feels like a superficial take on Nix, but I am quite used to that now. And indeed, our documentation is bad at explaining the essence of the thing.
NIH hobby projects with good ideas that are doomed to go to waste rub me the wrong way. They are only a temporary reprieve from the burnout felt by those who author them, anyway.
Here's the thing: Nix is too slow (even ignoring `nix-env`'s terrible search functionality, which should just be removed). What that requires is some good old boring profiling and optimization work. If he were to contribute that to a distro/package manager that basically shares his vision, it would be much more useful to the world.
Cause, at the end of the day, the work isn't so much maintaining the package manager as maintaining the packages. That's simply too much work for anyone to do alone.
http://blog.williammanley.net/2020/05/25/unlock-software-fre... is a good piece on why the ultimate issues with packaging are social, not technological. At this point, when the vast majority of devs don't seem to act as if there is a commons that even needs integration, I don't think any one-person technological solution is going to be so good as to upend the social situation.
I don't think this is intended as a one-person technological solution. It's a one-person research project, trying to see if there are any architectural changes that can be applied to other distros that people actually use, such as Debian or Nix.
It's true that the commons needs people willing to put in time and effort on boring things, but they have to be boring in the first place. If the author were to show up and say "Hey, Nix, if you rearchitect in this massive way it may or may not bring big improvements" and sent in a pull request, it would be rightly rejected. But it's still possible that a few days of rearchitecture can deliver the same results as a year of profiling and microoptimizations. The point of distri, as I understand it, is to have something to point to and say, this architectural change will actually work, and it's worth implementing in an actually-used distro.
Even if it were true that a hobby project is just a temporary reprieve from burnout... so what? They're free to do so. And maybe we'll all get something good out of it at the end.
The author has a history of delivering quality OSS projects: i3, Debian Code Search, RobustIRC, gokrazy.
It looks like distri has very different design goals from Nix, with an emphasis on speed rather than rock-solid reproducibility. Although some of the key details do seem to be inspired by Nix, that's no reason to discourage experimentation by calling it a travesty.
Not strictly related, but I was thinking about this the other day: how long will it be before some group or company creates a truly new operating system that takes off?
I mean, the Windows kernel is going on 20 years old. The macOS kernel is based on Unix, which is even older. Linux is also based on Unix, as are Android and iOS.
The more we add to these operating systems, the harder it becomes to walk away from them because we have so much invested in them.
Does this mean that in 200 years we'll still be using the descendants of these early operating systems? Under what circumstances would someone decide to start something truly new? And what would it take to ever reach a feature parity with the existing options?
And to be clear, I'm not saying there's a reason to walk away from these. I'm not an operating system programmer, I don't even really know that much about it. I'm just wondering if it will always make sense to just keep adding to what we already have.
There was a quote on slashdot long ago that has stuck with me:
> When I was walking into NEC a couple months ago with my good friend at Red Hat, I asked him why he worked at a Linux company. He told me, "Because it will be the last OS". It took me a while for that to really sink in -- but I think it has a strong chance at becoming true. Any major advances in security, compartmentability, portability, etc. will wind up in Linux. Even if they are developed in some subbranch or separate OS (QNX, Embedded, BSD), the features and code concepts could (and most likely will) find their way into Linux.
I think it's mostly true, and for me at least, Debian is the last distribution, because it's so well put together, IME. Same goes for Emacs, 'the 100 year editor'[0] and I've recently been getting into Common Lisp, 'the 100 year language'[1].
The Linux kernel has changed enough that you can't really say it's the same thing. Heck, even userland has changed substantially - who here remembers having to run MAKEDEV for userspace access to devices? But you can probably still find static binaries compiled in the late 1990's that will run on modern Linux. ABI wise, the Linux kernel is functionally backwards compatible, and that's nothing to idly dismiss. I think operating systems will keep morphing until someone makes a radical leap of progress that they can't adapt to.
That's not to say that there are no advances still to be made. But as many observed in the discussion on DevOps[2], much of the activity in the sphere of information technology looks like busy work and not progress.
> But you can probably still find static binaries compiled in the late 1990's that will run on modern Linux. ABI wise, the Linux kernel is functionally backwards compatible, and that's nothing to idly dismiss.
This would be one of my big objections to systemd - I seem to have gone from a very decoupled kernel/userland (e.g. I can boot almost any media and then chroot into my system) to one where the kernel version and systemd are pretty tied together, making things more difficult.
>But you can probably still find static binaries compiled in the late 1990's that will run on modern Linux.
On the other hand, it's random chance whether a glibc-dependent binary from a modern program compiled today will run on a Linux install that's only 5 years old. And given the rapid addition of new compiler features, it's getting to the point where GCC on a 5-year-old install can't even compile a quarter of the new programs written today.
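A quick way to see why (the binary name is hypothetical): the binary records the glibc symbol versions it was linked against, and an older glibc simply doesn't have them.

    objdump -T ./myprog | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -3
    # On an older install, running it then fails with something like
    # "version `GLIBC_2.29' not found" (exact version depends on the toolchain).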
Containers outside of their useful server-side context, containers for desktop applications, are the fever of this future shock.
I think you are underestimating what AI will do to computer science once AI can reason about the source code in a software system better. GPT-3 writing code from prose, amplified by 1000x, doesn't seem so far off. This will likely trigger a sort of Cambrian explosion of software systems.
I agree about Linux, but I disagree about Debian. It's too easy to create a new distro which is mostly compatible with everything. But to create a new OS, you are expecting everyone to rewrite all their software for the new OS.
As for common lisp, it's not even popular now, so I wouldn't expect it to suddenly become popular in 100 years.
It's more likely for these systems to slowly evolve into something new rather than being replaced entirely.
macOS and Linux, at some level, are already very, very different from earlier Unix systems. The BSDs are more conservative, but there are still systems being reworked across all of them (HAMMER in Dragonfly, pledge/unveil in OpenBSD, etc.). The ideas of Unix will probably be with us forever, but the precise details of implementation are transient.
I suppose one could imagine something related to dataflow architectures, for example. But history still suggests betting on something that is an offshoot/evolution of basically all the mainstream operating systems we have today rather than a radical ground-up rethink.
The driver problem is even worse now than it was 20 years ago, in that there is quite a bit of hardware that we expect to work as a baseline these days. Each day things like FreeBSD become less and less of an option for general use, due to a lack of drivers. For example, it's a poor choice for newer laptops, as it doesn't have 802.11ac support.
New kernels and OSes that show up are at a massive disadvantage compared to Linux because of this. I think that an OS that prioritizes being able to borrow drivers from Linux or one of the BSDs would give itself an advantage.
The security model of today's desktop OSes is pretty lacking. There's no reason every application you run should have all of your authority right from the start, but generally right now they do. (Sandboxing can improve the situation but getting sandboxes right without restricting the user too much is difficult.)
A personally fascinating situation that doesn’t get talked about much:
1. App is sandboxed, and has to ask for access to every bit of your information (photos, contacts, etc)
2. App asks at a time when it seems reasonable (I’m taking a photo and need to save it)
3. Now the app has the ability to exfiltrate everything you just gave it access to (like all photos), and (in the majority of cases) it has that ability without on-device oversight
There's been talk about having the file picker dialog reside outside the specific app sandbox, and the app only receiving (revocable) capability to act on whatever the user authorized.
That is, on file save the app would be handed essentially an open, writable and closeable, file descriptor -- the app might not even know the name of the file it's writing to.
On file open, the app might be handed an open, read-only, file descriptor.
Making such a mechanism usable for both the end user and the app developer gets complicated for sure, but the idea of not just permanently allowing "read+write ~/Photos forever" is definitely out there.
Right, so there are a few layers to this, and lots of them involve the system not following the principle of least authority. For example:
- If you're just taking a photo and need to save it, why does the app need access to all your photos? Surely an append-only capability would be sufficient.
- This depends on the app, but if you're just taking a photo why does the app need internet access? If the app is a typical camera app, it sure doesn't - you might often want to pass the data to an app that does (via e.g. a share sheet) but in general the camera app itself has no need to reach out to the internet. (And if it does, does it really need to be unrestricted access to the internet?)
- Why is it so easy for apps to request access to everything and so hard for the user to say "no, actually you only get to see this"? (iOS has been improving this lately but it's still a pretty rare feature.)
But yes, as you also allude to, it's not obvious to the user what access a program has after it's been granted.
The latest iOS lets you grant access to only a limited subset of photos (and I believe separates that from write access). It's still annoying enough that you'll eventually grant access to everything though (i.e. the second time you use the photo picker it doesn't re-prompt for permissions, it just shows the one photo you already granted access to).
I think the answer to that lies in the past: what made us walk away from the OSes of the 70s/80s and move to new things? And in some cases, what made us stay?
Application inertia is hard to overcome. You really need a killer feature and developer adoption to have a chance.
I’d take exception with the idea that internet/web did a better job of defining interfaces. The web equivalent of Unix (i.e. something we’ll probably never move from, and will haunt us forever) is Javascript.
I think it's different because the internet is literally interconnecting devices, some of which might not even be running an OS per se.
Whereas Apple can release a new device on new OS with some custom hardware/drivers and as long as the internet + web parts are compatible with those interfaces there is no issue for customers.
Maybe you're right though, but also, isn't that what POSIX was meant for?
Also for things like Wi-Fi, they define those standards but the underlying vendor implementation is totally custom and can get really ugly still, just like an OS.
The true benefit of hard interfaces is enforcing modularity, and the benefit of modularity is being able to upgrade a system. (At minimum, emulate & migrate over time)
Even "relatively" simple monolithic systems are nightmarishly complex to change.
Apple doesn't really give a shit about backwards compatibility, past the bare minimum to keep their platform devs from revolting.
POSIX, originally, was intended to provide a solution to every-vendor's-Unix problem. A nice side effect of it was absolutely that we got (mostly) standard interfaces.
I’d argue that Apple’s graphics-first approach did that.
OSX set them back in that sense (yes; and also ahead. Don’t @ me), but I personally think there are some current opportunities around making graphics-first (and task-first) operating systems.
I think some of that depends on what you count as "new" - if you're going to hold "inspired by" or "based on" against it, then there will probably never be anything new under the sun. This is both because, indeed, throwing away all existing standards/programs/drivers/interfaces requires an absurd investment and makes adoption unlikely, and because at some level it has all been done before - there just aren't that many ways to coherently store data, and files+databases+tagging turns out to cover pretty much everything.
> how long will it be before some group or company creates a truly new operating system that takes off?
You won't get another Windows / Linux / BSD. They do their jobs very well, and as WSL emulating Linux proves, they are nearly interchangeable. I have no doubt Linux could emulate the Windows kernel nearly perfectly too if we had the Windows source to see what is required. When you've already got a slew of interchangeable parts, why create another one?
If something new is to displace them, it's going to have to do something very different. The only thing I can see is not a replacement, but a security kernel that allows us to establish a trust chain from the hardware to some application. It wasn't necessary when the hardware sat on your lap or desk, but now that we tend to rent CPU cycles from some machine on the other side of the planet and yet still want some assurance of privacy, it is becoming kind of essential. There are a number of proprietary ones out there now, but I think that's doomed to fail. No one in their right mind would put their faith in a binary that could be in cahoots with anybody, from the US government to a ransomware gang.
You can have totally different models based on the same kernels. Android (Linux), Chromium OS (Linux including bits of Gentoo), and iOS (Darwin / XNU / Mach and BSD) all come to mind as very different OSes from traditional UNIX systems in terms of architecture, especially with regard to package management, updates, user accounts and access control, sharing files or making IPCs across applications, resource management, etc.
The kernels are a small part in the overall picture. Userspace apps today are abstracted under layers upon layers of runtimes and libraries, to the point your interaction with the OS is minimal if any.
There’s simply no incentive to upend the current order; and to do that, you either need to control your hardware stack completely or get buy-in from vendors to develop drivers.
Mainstream OS platforms (on desktop, server, mobile) are driven by path dependency, inertia and network effects. The embedded side is slightly less constrained but heavily weighed down by those as well. Whether it would technically make sense to start a new OS has only peripheral relevance unless there are giant, overwhelming advantages (and few disadvantages) vs the incumbent ones.
Any credible attempt will have to be designed from the start to focus on compatibility with whatever it replaces, have huge investments in the ecosystem transition from the corporate owner, have a credible commitment to the phase-out of the superseded OS, and be prepared for a very long transition period.
Will be interesting to see what happens with Fuchsia. Its success might be a big loss to open computing though.
By open computing I meant platforms that are user-controllable and support general-purpose computing (instead of walled gardens).
Android has been steering away from that position even though Google does still choose to publish AOSP as code drops. Being based on Linux has definitely helped keep it open a while longer. It seems likely that Fuchsia would, in the best case, have a role similar to Darwin's.
I have a feeling that quantum computing will require this change. Right now it's just proprietary tools to write and load software, but it will likely one day become a full OS, with a different paradigm than today.
It's not harder to walk away from them now than it was 20 years ago, probably easier now with so many things written in cross platform/web app format. I can't really agree with you. If someone comes up with a new OS that has some OMG 10X advantage (due to software or some new leap in technology) over current ones then it will "win" the OS war. I just don't share your concerns.
Here's my thoughts. They're not too well developed here to be honest. I'm probably wrong about some of this and need to research more.
The "new thing after Unix" is already here and has been here for some time. It's called Hurd. It depends on a microkernel whose only job is to pass messages between different components, but microkernels don't work well (at least not better than monolithic kernels) on x86 due to context-switching overhead.
x86 maintains its architecture due to software compatibility. That's what we're really stuck on. A lot of improvements have been bolted onto x86 - like caches, all the instruction-stream acrobatics, SIMD, VMs, long mode, etc. - but it's still "heavy"; things are done to make it look fundamentally the same to existing programs.
When CPUs start becoming cheap little tiny things set in a superfast fabric to talk and cooperate with potentially thousands of other cores (like GPUs) as fast as they can chew local instructions, and all the stuff with I/O is worked out, we're ready to take the next step. You can see a little bit of the future with things like the Cell BE, but it needs to be much higher scale to change what OS is dominant.
Unix itself is ~20 years older than Linux's start.
NeXTSTEP implemented Unix atop a microkernel for security benefit.
Hurd splits all the subsystems of Unix into independent facilities that are connected via the microkernel. One of them can crash and not affect the rest of the system, but also I got the idea reading about Hurd that there's no particular requirement one subsystem lives on a specific CPU or even machine as long as they can communicate.
> but microkernels don't work well (at least not better than monolithic kernels)
Isn't "at least not better" (meaning, as good as) sufficient in terms of performance? Because the point of microkernels afaik isn't better performance, but security (edit - and robustness).
> on x86 due to context switching overhead
how high is the extra overhead on Hurd compared to a more conventional OS?
A couple of healthy reminders to avoid drama and FUD:
1) People announce leaving publicly and the FLOSS community takes notice. This is a sign of Debian's health. In many other projects few people would notice.
2) The number of Debian developers, projects, and packages has been increasing for decades.
3) For each person writing on mailing lists and blogs there are 10 people quietly contributing.
4) The same applies to the occasional flamewars. Vocal minorities are not representative of the thousands of DDs and contributors.
Exactly. I'm a long time Debian user, and after reading that piece I'm not worried that Debian is going to disappear, but I think he is raising good points that should be addressed.
I'm a DD. His points left me scratching my head. Yes, Debian's infrastructure is old, but so is Big Ben, and just as Big Ben still does an admirable job of broadcasting the time to the locals, the Debian Bug Tracker does an admirable job of tracking bugs. I get that he personally might prefer to interact with it via a more modern web page, but that wouldn't alter how well it tracks bugs or facilitates discussions. And besides, I find email easier to interact with automatically than a web page. Oddly, he goes on to list not being able to automate things as a complaint.
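For example, the whole BTS can be driven from a script by mailing plain-text commands to the control bot - something like this (the bug number is made up):

    cat <<'EOF' | mail -s "control" control@bugs.debian.org
    severity 123456 important
    tags 123456 + confirmed
    thanks
    EOF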
The same goes for the rest of the things he lists. Yes, he might prefer to do them some other way, but the way they are done now has obviously been working very well for a long time.
As for Debian being incapable of making big changes - that's just rubbish. He's been there for 10 years for pete's sake; systemd was a big change requiring many packages to be updated, spanning several years and several releases, while still delivering a working system as it happened. That's not big? How about altering the source package format, or moving away from sha1 for signing, or making everything build reproducibly, or moving all developers using its collaborative development platform from FusionForge to GitLab? Sorry, he's just plain wrong on the "can't make big changes" point. Debian regularly makes big changes every major release. In bullseye they will have made big strides in migrating away from Python 2. Changes of this scale are things other projects regularly struggle with, but not Debian.
His post lists a whole pile of things about Debian he's discovered he no longer likes, which is fair enough. But as he says in his introduction, it's he who's changed: he's gone from a student with lots of time on his hands who was happy to be part of a loosely collaborating group to being a member of a very focused and highly directed team at work, and he's discovered he prefers the latter. Great, I get it, happens to all of us. But that doesn't mean Debian no longer works. It clearly works very well. It just means he's no longer a great fit for that way of doing things.
Big Ben is a Victorian hand-wound clock. Its accuracy is maintained by moving a stack of old English pennies balanced on its pendulum. It must be wound by hand three times a week, and it takes one and a half hours to wind every time.
My guess is that the author of the piece we are discussing would find this to be an excellent analogy for the Debian processes he is complaining about.
At least one of his points is already addressed by the Twitter thread he links: Debian's bug tracker behaves correctly; Gmail just ignores the information included in the email headers when grouping them.
So he has been told that this is a Gmail issue, but insists on using it. Meanwhile he complains that the rsync package maintainer is blocking his changes out of personal preference. Double standards much?
Gmail is an immensely popular email provider with 1.5 billion users in 2018. It has 65% of US market and even on a global level it's the leading email provider.
This is not some esoteric email client that a handful of developers refuse to let go.
But it still is the source of the faulty threading behavior. The issue also seems minor enough that there is no point in adding a hack on Debian's side to make the output on Google's proprietary endpoint look better.
I mean, I'm also not sure if his "FUD avoidant" post has the desired effect the way it is posted, but "damage control and image management" suggests that Debian is a company with a PR department doing such a thing.
That's simply not the case, and the OP is right that there were many Debian Developers in 2019 and there are many other Debian Developers now.
That's not to say it won't be noticed, or will just be brushed over, if a prolific member decides to step down.
> The piece was cogent, respectful, and constructive.
Just for clarity, I agree with that.
> How about addressing his points?
Who, the OP? Even his nickname suggests affiliation with Debian; normally the support and action of more people is required to bring about bigger change.
But yes, IMO Debian surely needs to continue to adapt or be doomed to frustrate more developers in the future.
Things like (from the blog post):
> I tried to contribute a threaded list archive, but our listmasters didn’t seem to care or want to support the project.
This just seems baffling to me: he proposed to do the actual work (and with his record one could be certain that he'd follow through) for a feature where one can only win (i.e., don't like it? Just continue to use what you like).
Such resentment against unproblematic changes, bringing value to some group but not taking away value from others, is tedious and demotivating.
But who takes up the fight to change Debian? In the end it probably needs to come from within, i.e., a sizeable share of Debian Developers needs to drive and push it forward, or at least reduce the barriers for those who wish to do so respectfully, without breaking what exists now.
> "damage control and image management" suggest that Debian would be a Company and a PR department doing such a thing.
No, damage control or image management in no way implies a company or a PR department. Any group can engage in these activities. As you point out later, the OP appears to be affiliated with Debian.
> Who, the OP? Even his nick name suggests affiliation with Debian, there is normally support and action of more people required to bring bigger change.
Yes, the OP. I perhaps should have used the words ‘commenting on’, or ‘responding to’ instead of ‘addressing’.
I am not expecting the OP to solve the problems, but I am suggesting that it would be more constructive to comment on the substantive content of the original article than to write innuendo about how many people are just quietly contributing, or implying that the author may be part of a ‘vocal minority’.
> No, damage control or image management in no way implies a Company or PR department. Any group can engage in these activities. Later you point out, the OP appears to be affiliated with Debian.
1. I said it seems he is affiliated, but anybody can nickname himself a variant of "debian developer" in any forum.
2. It implies a formal body of the organisation - which can be a single person, like the DPL - otherwise it's not damage control by Debian, as you suggest, but by a single person, which can hardly be framed as damage control in this case; the blog clearly referred to Debian as a whole, not a single person.
> I am not expecting the OP to solve the problems, but I am suggesting that it would be more constructive to comment [...]
That's what you say now, but not what you said originally. As I said, change needs to come from within Debian, not from some HN discussion - talk is cheap.
> I'm in no way responding to the blog, as you can see in my 4 points.
Your opening line in no way makes that clear:
> A couple of healthy reminders to avoid drama and FUD:
If your intent was to "in no way respond to the blog," you should have instead written something like this:
"Unlike the article, it seems like a lot comments here are intent on spreading FUD about Debian..."
> I recommend attending Debian events in person (once COVID is gone) to see that 99% of interactions between people are very friendly.
In the meantime, I'd recommend reading the blog: in it a Debian developer mentions having very friendly interactions with other Debian friends before diving into a technical, respectful, and detailed critique of the developer UX in Debian.
OS work used to feel so ceremonial, back in the old days (ie 2003.)
Hobbyists could cobble together hardware and ship it to a data center, but if you couldn’t afford a serial console you’d have to unrack the machine every time you messed up a kernel upgrade.
Now I can just remotely blast a clean install onto bare metal, and then build containers or VMs on that, and it’s so easy I can rebuild all infra every morning, from scratch, just for fun (ahem, to verify its idempotency.)
Gone is the need for the high quality package management of Debian: The Universal Operating System. I lament this as much as I lament the decline in quality and commoditization of many many other things in life. Food, ISPs, journalism, education.
Young whippersnappers. The old days were when you had to order a CD of Slackware in the 90s and hope your disk controller was supported.
All great tooling becomes a victim of its success. Packaging is so good and simple now that you don’t have sysadmins anymore, and many companies know little or nothing about what they are doing. So people build the golden build and clone away.
Reading this I am reminded of when I first tried debian in the 90s and its packaging blew everything else away. Nobody had anything like apt-get.
The top complaint, as it was for decades, was that packages were out of date. But I don't think there was as much of a cultural attitude that running a package from a few months or a year ago was a cardinal sin as there is today. People here suggest switching away from projects if there hasn't been a commit in two months. Back then, it wasn't so painful to deal with unless you ran into the need for a very specific feature or bugfix. Then there was testing and unstable.
And for readers unfamiliar with Debian: Debian testing is what other distros would call stable. Debian stable is what you'd pick to run, for example, a nuclear powerplant or the like. Debian testing is probably what you want to run on your desktop system.
> All great tooling becomes a victim of its success. Packaging is so good now and simple that you don’t have sysadmins anymore
Too true. A lot of my interaction with sys admins at work is "hey can you install this package from the repo?" and only because I don't have administrator privileges. Not to downplay our sys admins, but it feels like they are overqualified sometimes.
You had a Slackware CD? The old days were when you had a stack of only 20 floppies to install SLS with (of the required ~44 or something), had to download the disk contents at school and carry them home, and some of your floppies ended up having read errors. It took literally days of going back and forth, with your home PC sitting in the installer waiting for the next floppy.
> Gone is the need for the high quality package management of Debian: The Universal Operating System.
Well, almost. What's inside your container, though? For anything of moderate complexity, my containers always end up having some apt-get or yum installation in them; it's not like I want to Dockerfile up the manual from-source install steps for every small package, nor do I expect to have perfect upstream Dockerfiles to FROM that include all the exact bits I need....
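Something like this hypothetical minimal Dockerfile is typical - the distro's package manager still does the heavy lifting inside the image:

    FROM debian:stable-slim
    RUN apt-get update \
     && apt-get install -y --no-install-recommends ca-certificates curl \
     && rm -rf /var/lib/apt/lists/*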
Not to mention having stuff built for you for all sorts of different architectures. It generally doesn't matter if you're running x86, ARM, whatever if you're installing packages from Debian; by and large everything in the distro is packaged for every architecture and they generally just work.
Yes, but they are usually single-task hosts. All the effort that maintainers put into keeping /etc tidy when multiple pieces of software are installed on the same host is for nought.
I swear 20 years ago was so much better than everything today. Life was simpler, more people knew their places, it was hard to install an OS and good quality hardware was much harder to get hold of. Few whipper snappers mussing up the place--metooing and muhrightsing, it's hard to express to these poor souls in 2020 how much better we were as a superior generation both computer-wise and societally.
This probably got posted because he reworked the blog theme and fixed the post. This rebumped the post in the RSS feeds. AFAIK nothing new has been added?
This is exactly what happened. Saw the post pop up in Newsblur this morning in Planet Debian feed, but I didn't notice the original date since I read it in "reader" view, otherwise I would have tagged it with a 2019.
I didn't see it the first time around, and it caught my attention since I've had an interest in becoming a Debian Maintainer for about 15 years now, just never fully did it (even have a GPG key signed by a few Debian Devs) and a lot of the frustrations echoed with me.
Reading this, I'm glad I'm working on Fedora: one git repo per package, one tool to rule them all (fedpkg), one tool to sync them all locally (grokmirror), (relatively) easy global changes through proven packagers who can ask for mass changes before each release.
I don't know how I would do it if I had to deal with svn, mercurial, or no SCM at all. I understand the desire to be decentralized, but this is being done at the maintainers' expense, and they are often already stretched thin.
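For the curious, the per-package workflow is roughly this (the package name is a placeholder; details per the fedpkg tooling, so take it as a sketch):

    fedpkg clone somepackage    # one dist-git repo per package
    cd somepackage
    fedpkg srpm                 # build a source RPM locally
    fedpkg build                # submit an official build to Koji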
So, the last time I used Fedora was around 13; at that time it seemed like major changes to how things were configured were a part of every version bump, and trying to deploy it to multiple machines was an exercise in frustration, as the next release would come along, blow away a lot of hard work, and necessitate a redo. Moved away from it and over to Ubuntu, where the LTS resulted in less work for me managing labs of machines.
Is it better now? Stabilizing? Anything to actually set it apart that you'd call out specifically as being advantageous for Linux on the desktop?
Yeah, Fedora stabilized the upgrade process a lot in its twenties, both from what I've heard other people talking about and my own experience pulling a couple of workstations from... I wanna say version 23 to 28? (I switched off for unrelated reasons.) Very usable now.
dnf/RPM is the biggest offender in his benchmarks when it comes to package manager speed. Not only is metadata for a package an order of magnitude larger, the package manager itself works slowly.
Yeah. There have been multiple projects, even from inside Red Hat, to try to switch away from the Berkeley DB backend to some sort of reasonable database (one example was razor: https://github.com/krh/razor ), but for a variety of reasons, it was never able to be dethroned.
Except that Debian's apt repo style has been there for decades and still works well; Red Hat/Fedora's packaging and development model has instead changed who knows how many times, and of course the current one is the golden one - until the next one arrives, that is.
The current RPM repository metadata format has existed since Fedora Core 2 in 2004. The build system infrastructure changed to Koji in 2007[1]. The development model has changed exactly once when Fedora switched from CVS to Git in 2009[2].
There have been no significant changes to Fedora packaging model until three years ago, when Modularity was introduced[3] and Pagure was deployed to ease contributions and support building modules[4]. And the modularity concept is primarily used for alternate software streams in Fedora, so the vast majority of Fedora packages don't use this feature.
Looking at openSUSE, the community I contribute to, I have the same feeling about almost all of these points. Keeping Bugzilla around makes me crazy. It's 2020 and you're still not able to delete a comment.
Wow, I'd love to know the story behind that. My guesses (A) No one cares enough to add it (B) some core maintainer is actively hostile to the idea (maybe someone deleted their bug report once and they've been getting revenge on the world ever since) or (C) the codebase is too fucked up for anyone to figure out how to add a delete button.
Industry-wide institutionalised hoarding disorder that condemns us all to live atop the ever-expanding junk heap of the past, with the hoarder's justification "might come in useful one day". And that workaround is an accidental bug.
A world where big important decisions made in person go completely undocumented, yet weeks of bickering and bikeshedding over nothing are kept forever.
> When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time.
I felt so busy in college - what a tragedy! If only I had known what that would mean later in life.
The best part about college for me is I could disappear for the summer, going somewhere and doing what I wanted, knowing that my life would wait for me until Fall.