Debian discusses vendoring again (lwn.net)
195 points by Tomte on Jan 13, 2021 | 159 comments



A snippet of a quote from Pirate Praveen in the article.

> All the current trends are making it easy for developers to ship code directly to users. Which encourages more isolation instead of collaboration between projects.

This is where I would respectfully disagree. As a dev, packaged libraries from the system are often fine – until I hit a snag, and need to work with the devs from another project to work out a fix. With cargo/node/yarn/poetry/gopkg/... I can send a PR to another project, get that merged, and vendor in the fix while all of that is happening.

If I can't do that, I'm left with hacky workarounds, as upstreaming a fix and then waiting up to 12 months (if I'm on a six-month release tempo OS) for the fix to be available to me is just not practical.

Being able to work on a quick turnaround with dependencies to fix stuff is one of the huge wins with modern build tools.


Even as an engineer, I usually draw a line between "stuff I like to hack on" and "core system components that I'd rather not touch." I'm fine pulling a dependency from nightly for a project I'm working on, or because some program I use has a cool new feature I want to play with. But I probably wouldn't do that with, say, openssh.

I can certainly sympathize with this:

> and then waiting up to 12 months (if I'm on a six-month release tempo OS) for the fix to be available

but the needs of system administrators are not the same as the needs of developers. That's why my development machine is on a rolling release, but my servers run Debian stable with as few out-of-repository extras as possible.

Those servers are really fucking reliable, and I don't need a massive team to manage them. Maybe this sort of "boring" system administration isn't as popular as it used to be with all of that newfangled container orchestration stuff, but this is the core of the vendoring argument.

Installing who-knows-what from who-knows-where can work if you're Google, but it really sucks if you're one person trying to run a small server and have it not explode every time you poke at it.


> but the needs of system administrators are not the same as the needs of developers. That's why my development machine is on a rolling release, but my servers run Debian stable with as few out-of-repository extras as possible.

As another code monkey-cum-sysadmin, I very much second this: my servers are Debian stable without non-free, and there's damn good reason for that.

I can appreciate GP's argument, and I've been there, pulling down CL libraries for playing around on my development machines. But what GP leaves out is that more often than not, those distribution-external packages break, and if I was relying on them, I'd be left holding the bag.

I do agree, there is a problem (that the LWN article goes into), and it definitely needs attention. Distributions might be able to handle newer ecosystems better.

But for all the awesome whizbang packages of NPM, QuickLisp, etc, developers need to realize that sysadmins and especially distro maintainers have to look at the bigger picture. Maybe consider that if your software introduces breaking changes or needs security updates on a weekly basis, it isn't production ready.


I wonder if those packages that have been de-vendored by Debian developers, using kludges that are not supported upstream, meet your stability expectations. Not to be critical of your point, because I agree entirely.


> Maybe this sort of "boring" system administration isn't as popular as it used to be with all of that newfangled container orchestration stuff

Indeed, and all the "newfangled container orchestration stuff" also needs to run somewhere stable.


The point of vendoring and not using dynamic linking is to avoid spooky action at a distance that screws up everything.


I've been maintaining a Node.js app for about five years and almost all dependencies have been "vendored"/locked with forked libraries because some of the dependencies have been abandoned, and some have switched owners where the new owner spends their days adding bugs to perfectly working code due to "syntax modernization", or where the maintainer didn't accept the pull request for various reasons. Software collaboration is not that easy, especially if it's done by people in their (very little) spare time.


> Software collaboration is not that easy, especially if it's done by people in their (very little) spare time.

This is true, but somehow the Node ecosystem has managed to do it worse than those that came before it.

At the risk of sounding elitist, I submit that this phenomenon is due to a flood of novice developers that are entering the industry via six week "learn to code" boot camps.


I think you misunderstood what he is talking about.

The issue he's addressing is that you don't care about other projects also using this library.


> By trying to shoehorn node/go modules into Debian packages we are creating busy work with almost no value.

Another problem, at least with python I've encountered this, is that the debian packages sometimes seem to fight what you downloaded via pip. It's not made to work together. I'm not a python dev so it was very confusing to figure out what is going on, and I wouldn't be surprised if it would be similar if you mix npm and deb packages for js libs. They don't know of each other and can't know which libs were anyway provided by the other, then search paths are unknown to the user etc. I think I went through similar pain when I had to get some ruby project going.

My gut feeling is that it would be best if debian only supplied the package of the software in question and lets the "native" dependency management tool handle all the libs, but I guess that would give the Debian folks a feeling of losing control, as it indeed makes it impossible to backport a fix for specific libs; rather you'd have to fiddle with the dependency tree somehow.


> the debian packages sometimes seem to fight what you downloaded via pip

It's a bit annoying, but there are simple rules, and they apply to pip/gem/npm the same (not sure about go): For each runtime installation you have a place for global modules. If you installed that runtime from a system package, you don't touch the global modules - they're managed using system packages.

If you install the language runtime on the side (via pyenv, asdf or something else) or use a project-space environment (python venv, bundler, or local node_modules) you can install whatever modules you want for that runtime without conflicts.


Put more simply: never run `sudo pip install foo`. That's never expected to work, and it's a pity it doesn't just give a simple error "don't do that!" rather than sometimes partially working.

As you said, you should start a new environment instead and install whatever you like into that. For Python, that means using virtualenv or python -m venv. You can always use the --system-site-packages switch to get the best of both worlds: any apt install python3-foo packages show up in the virtual environment, but you can use pip to shadow them with newer versions if you wish.
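
A minimal sketch of that workflow, with a made-up project name:

  # a venv that can also see the apt-installed python3-* modules
  python3 -m venv --system-site-packages ~/venvs/myproject
  source ~/venvs/myproject/bin/activate
  # shadow the system copy of a library with a newer version, inside the venv only
  pip install --upgrade requests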


I basically don't let anything but the package manager touch /usr. Too many mysterious issues in systems that had gotten screwed up. It's extremely rare that it's necessary for any project: if you need to build and install some other project you can generally just install it in a directory dedicated to that single codebase you may be working on (with appropriate PATH adjustments, which can be sourced from a shell script so they are isolated from the rest of the system). I really dislike tutorials and guides which encourage just blindly installing stuff into system managed areas, but it's rife.
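
A rough sketch of what that looks like in practice (names and paths are just examples):

  # build some third-party project into its own prefix instead of /usr
  ./configure --prefix="$HOME/src/fooproject/prefix"
  make && make install

  # env.sh, sourced only while working on this codebase
  export PATH="$HOME/src/fooproject/prefix/bin:$PATH"
  export LD_LIBRARY_PATH="$HOME/src/fooproject/prefix/lib:$LD_LIBRARY_PATH"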


> I basically don't let anything but the package manager touch /usr.

That's the standard approach. Custom system-wide packages (as opposed to packages that are only installed for one user) should go in /usr/local/ or in a package-specific directory under /opt/.


The comment you're replying to is a bit ambiguous. Did they mean don't put anything directly in /usr (i.e. except /usr/local)? Or did they mean don't put anything anywhere under /usr? Both are consistent with my comment.

Personally, I stopped using even /usr/local (or /opt) many years ago. If it's not managed by the operating system then it goes in my home directory (except a few things in /etc that have to go there).


Exactly. Stuff in /usr/local is very capable of messing with other parts of the system (plus it munges everything not package managed together, which is even worse).


Pip has that feature now. Put this in your ~/.pip/pip.conf:

  [global]
  require-virtualenv = true
and then you get errors like:

  $ pip install foo
  ERROR: Could not find an activated virtualenv (required).


That's not exactly the same. I might be fine with installing Python packages with pip into my home directory, just not /usr.


My workflow's been to temporarily disable that, do the stuff I need, then re-enable it. It's a bit clunky, but I don't install stuff outside a virtualenv frequently enough for it to be a major pain in the neck.
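
One way to do that without editing the config file each time is pip's usual option-to-environment-variable mapping, if I remember it right:

  # one-off override of require-virtualenv for a single command
  PIP_REQUIRE_VIRTUALENV=false pip install --user foo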


Yeah I use pyenv + virtualenvwrapper and there are a few packages I am fine with having in the top-level pyenv version, rather than in any particular virtual environment: black, requests, click, etc.


This is simple but completely counterintuitive. I've seen it go wrong hundreds of times, and it has been the subject of a bunch of different workarounds (e.g. pipx).

Debian should probably ship a separate global python environment for Debian packages that depend on python where it is managing the environment - one with a different name (e.g. debpy), a different folder and, preferably, without pip even being available so that it's unlikely people will accidentally mess with it.

This could have isolated the python 2 mess they had for years also, decoupling the upgrade of the "python" package from the upgrade of all the various Debian things that depended on python 2.

really, it's easier to make "apt install python" be the way to install python "on the side".


> without pip even being available so that it's unlikely people will accidentally mess with it.

This has already happened. It only resulted in lots of "ubuntu broke pip" posts rather than understanding why that happened. (the fact it's not entirely separate from venvs didn't help) But considering that issue, imagine what would happen for people running `apt install python` and not being able to run `python` or `virtualenv`. Most setup guides just wouldn't apply to debian/ubuntu and they can't afford that.


Yeah, of course it did! That's why my primary suggested fix wasn't "just removing pip" but hiving the Debian managed python environment off somewhere different and calling it something else and with different binary names (e.g. debsyspython) that debian package authors could rely upon.

Then the default "python" and "pip" could be without debian dependencies and users could go wild doing whatever the hell they want without messing up anything else in the debian dependency tree (like they would with pyenv or conda).


I don’t have much to add, but Python maintainers have been suggesting solutions like this for years, and IIUC Red Hat distros use an approach similar to the one you described. Debian devs refuse to budge, like they always do on many topics, for better or worse. They are not going to do it, not because your approach is technically wrong, but because it does not fit their idea of system packaging.


This was probably considered and discarded because altering all references in all packages would be a ton of work, and bound to produce issues with every single merge.


If it's truly an unmanageable amount of work that's a sign that there are other bugs/problems lurking that need fixing.

If they did consider it and reject it I imagine it is more likely it was about avoiding backwards compatibility issues than the amount of work.

This would also signal that there are deeper bugs lurking that need fixing, however.


This is good advice for software lifecycle management in general:

https://wiki.debian.org/DontBreakDebian


Sure they may have to fiddle with the dependency tree, but Node & Go both have well defined dependency formats (go.mod, package.json). It should be relatively easy to record the go.mod/package.json when these applications are built, and issue mass dependency bump & rebuilds if some security issue comes up.

Really seems like the best of both worlds, and less work than trying to wrangle the entire set of node/go deps & a selection of versions into the Debian repos. I mean Debian apparently has ~160,000 packages, while npm alone has over 1,000,000!
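
For Go at least, the toolchain already embeds the resolved module list in every binary, so a sketch of the "record and query" half could be as simple as this (the binary path is just an example):

  # list the exact module versions baked into an installed binary
  go version -m /usr/bin/some-go-app

  # or record the resolved dependency graph at build time
  go list -m all > modules.txt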


> mass dependency bump

That’s not an option for Debian stable. They intentionally backport security and stability patches, and avoid other changes that might break prod without a really good reason.

https://www.debian.org/doc/manuals/debian-faq/choosing.en.ht...


The situation with backporting security fixes is still the same. Debian could backport the fix to any node/go lib the same way they backport security fixes to C libs.

The only difference is that a backported fix in a language that uses vendored dependencies rather than .so's needs to have all depending packages rebuilt.


Debian Developer here. Backporting fixes to tens of thousands of packages is already a huge amount of (thankless) work.

But it's still done - as long as there's usually one version of a given library in the whole archive.

Imagine doing that for e.g. 5 versions of a given library, embedded in the sources of 30 different packages.


I'm sorry to hear that it's thankless. Thank you for doing it. It is one of the pillars of my sanity, and I am not exaggerating.


Can’t you just use “update-alternatives” to set the versions you want?

https://wiki.debian.org/DebianAlternatives


Debian policy is very sane (no network access during build), but it does seem like modern software just assumes that the Internet is always available, and all dependencies (including transitive) are out there.

The assumption is a bit fragile, as proven by the left-pad incident ([1]). I hope that whatever the outcome of the discussion in Debian will be, it would keep the basic policy in place: not relying on things outside of the immediate control during package builds.

1. https://evertpot.com/npm-revoke-breaks-the-build/


Debian is incredibly conservative about versioning/updates and faces a lot of pressure to move faster. I hope they keep the same pace or even slow down.

The world will keep turning.


> Debian policy is very sane (no network access during build)

openSUSE has that policy, too. And I’m pretty sure the same applies to Fedora.

You don’t want to rely on external dependencies during build that you can’t control.

That would be a huge security problem.


The whole "download during build" thing is a minor issue; k8s, for example, puts all their dependencies in the /vendor/ directory, and AFAIK many toolchains support this or something like it. And even if they don't, this is something that can be worked around in various ways.

The real issue is whether or not to use that vendor directory, or to always use generic Debian-provided versions of those dependencies, or some mix of both. This is less of a purely technical issue like the above, and more of a UX/"how should Debian behave"-kind of issue.


I don't think that aspect of Debian Policy is in any danger of changing, nor should it.


It’s also not very Debian-specific. It applies to openSUSE as well, for example.


I am worried here that the alternative we'll end up with is that applications that rely on vendoring end up distributed entirely outside the Debian repositories... hopefully with go get/npm install, hopefully not with "download it from my website!"... But either way you lose a lot of the benefits that being officially in the Debian repos would bring. Devs want to distribute their software to users, and they aren't going to chase down rabbit holes to get it packaged to comply with every different distribution's set of available dependency versions.

Really this idea that a distro (even a large well maintained one like Debian) has the resources to package a set of known versions of go/node packages for common open source software seems wrong? If they aren't going to package every exact version that's required, how is it going to be possible to test for compatibility? There is no way. And no dev is going to downgrade some random dependency of their app just to comply with Debian's set of available versions.

Developers hate this versioning issue with languages like C/C++ on Linux, it's a huge pain. And that's partially why dependency management in languages like Go/Node work the way they do. A multitude of distros with slightly different versions of every lib you use is a huge headache to dev for, so people have designed languages to avoid that issue.


There has always been a split between software that is expected to be run for 10 or 20 years and software that will be obsoleted in 2 years.

https://www.cip-project.org/ aims to backport fixes to released kernel for 25 (twenty-five) years.

Because you don't "npm update" deployed systems on: banks, power plants, airplanes and airports, trains, industrial automation, phone stations, satellites. Not to mention military stuff.

(And Debian is much more popular in those places than people believe.)

> Devs want to distribute their software to users, and they aren't going to chase down rabbit holes to get it packaged to comply with every different distributions set of available dependency versions.

That's what stable ABIs are for.

> Really this idea that a distro (even a large well maintained one like Debian) has the resources to package a set of known versions of go/node packages for common open source software seems wrong?

Yes, incredibly so. Picking up after lazy developers to unbundle a library can take hours.

Backporting security fixes for hundreds of thousands of libraries, including multiple versions, is practically impossible.

> And no dev is going to downgrade some random dependency of their app just to comply with Debian's set of available versions.

Important systems will keep running for the next decades, without the work from such developers.


That's already the reality for most of this century. Openjdk, go, rust, docker, npm/yarn, etc. all provide up to date Debian, Red Hat, etc. packages for what they offer. There's zero advantage to sticking with the distribution specific versions of those packages which are typically out of date and come with distribution specific issues (including stability and security issues).

Debian's claims to adding value in terms of security and stability to those vendor provided packages are IMHO dubious at best. At best they sort of ship security patches with significant delays by trying to keep up with their stable release channels. Worst case they botch the job, ship them years late, or introduce new bugs repackaging the software (I experienced all of that at some point).

When it comes to supporting outdated versions of e.g. JDKs, there are several companies specializing in that which actually work with Oracle to provide patched, tested, and certified JDKs (e.g. Amazon Corretto, Azul, or AdoptOpenJDK). Of course for Java, licensing the test suite is also a thing. Debian is probably not a licensee given the weird restrictive licensing for that. Which implies their packages don't actually receive the same level of testing as the aforementioned ways of getting a supported JDK.

On development machines, I tend to use things like pyenv, jenv, sdkman, nvm, etc. to create project specific installations. Installing any project specific stuff globally is just unprofessional at this point and completely unnecessary. Also, aligning the same versions of runtimes, libraries, tools, etc. with your colleagues using mac, windows, and misc. Linux distributions is probably a good thing. Especially when that also lines up with what you are using in production.

Such development tools of course have no reason to exist on a production server. Which is why docker is so nice since you pre-package exactly what you need at build time rather than just in time installing run-time dependencies at deploy time and hoping that will still work the same way five years later. Clean separation of infrastructure deployment and software deployment and understanding that these are two things that happen at separate points in time is core to this. Debian package management is not appropriate for the latter.

Shipping tested, fully integrated, self-contained binary images is the best way to ship software to production these days. You sidestep distribution specific packaging issues entirely that way and all of the subtle issues that happen when these distributions are updated. If you still want Debian package management, you can use it in docker form of course.


> That's already the reality for most of this century. Openjdk, go, rust, docker, npm/yarn, etc. all provide up to date Debian, Red Hat, etc. packages for what they offer. There's zero advantage to sticking with the distribution specific versions of those packages which are typically out of date and come with distribution specific issues (including stability and security issues).

The advantage is the very reason one would choose Debian to begin with — an inert, unchanging, documented system.

A large part of this problem seems to be that users somehow install a system such as Debian whose raison d'être is inertia, only to then complain about the inertia, which makes one wonder why they chose this system to begin with.

> Debian's claims to adding value in terms of security and stability to those vendor provided packages are IMHO dubious at best. At best they sort of ship security patches with significant delays by trying to keep up with their stable release channels. Worst case they botch the job, ship them years late, or introduce new bugs repackaging the software (I experienced all of that at some point).

Evidently they add value in terms of stability, but methinks many a man misunderstands what “stable" means in Debian's parlance. It does not mean “does not crash”; it means “is inert, unchanging” which is important for enterprises that absolutely cannot risk that something stop working on an upgrade.

> Shipping tested, fully integrated, self-contained binary images is the best way to ship software to production these days. You sidestep distribution specific packaging issues entirely that way and all of the subtle issues that happen when these distributions are updated. If you still want Debian package management, you can use it in docker form of course.

Not for the use case that Debian and RHEL attempt to serve at all — these are systems that for good reasons do not fix non-critical bugs but rather document their behavior and rule them features, for someone might have come to rely upon the faulty behavior, and fixing it would lead to breaking such reliance.


That's why most shops deploy docker containers: it's not convenient at all for them to have Debian, Red Hat, etc. repackage the software they deploy or be opinionated about what versions of stuff are supported. For such users, the OS is just a runtime and it just needs to get out of the way.

Ten years ago, we were all doing puppet, chef and what not to customize our deployment infrastructure to run our software. That's not a common thing anymore for a lot of teams and I have not had to do stuff like that for quite some time. A lot of that work btw. involved working around packaging issues and distribution specific or distribution version specific issues.

I remember looking at the puppet package for installing ntp once and being horrified at the hundred lines of code needed to run something like that because of all the differences between platforms. Also, simple things like going from one CentOS version to the next were a non-trivial issue because of all the automation dependencies on stuff that changed in some way (I remember doing the v5 to v6 migration at some point). Dealing with madness like that is a PITA I don't miss at all.

There's definitely some value in having something that is inert and unchanging for some companies that run software for longer times. Pretty much all the solutions I mentioned have LTS channels. E.g. If you want java 6 or 7 support, you can still get that. And practically speaking, when that support runs out I don't see how Debian would be in any way positioned to provide that in a meaningful way. The type of company caring about such things would likely not be running Debian but some version of Red Hat or something similarly conservative.


>It does not mean “does not crash”; it means “is inert, unchanging” which is important for enterprises that absolutely cannot risk that something stop working on an upgrade.

But would enterprises accept being forever stuck with any bugs that aren't security related? Even RHEL backports patches from newer kernels while maintaining kABI.


We're talking about entities that run COBOL code from the 60s and are too afraid to update or replace it, for fear that something break.

There's a reason why most enterprise-oriented systems take inertia quite seriously — it is something greatly desired by their customers, who lose considerable capital on even minor downtime.


> Debian's claims to adding value in terms of security and stability to those vendor provided packages are IMHO dubious at best.

That’s not true. The idea is that the distribution is tested and stable as a whole, and replacing something like OpenJDK can cause a lot of breakage in other packages.

There is a reason why enterprise distributions provide support only for the limited set of packages that they ship.


Depends, if you install a statically linked version from a third party it won't create many headaches. That kind of is the point of vendoring and static linking: not making too many assumptions about what is there and what version it is. Works great at the cost of a few extra MB, which in most cases is a complete non-issue for the user.

Debian self-inflicts this breakage by trying to share libraries and dependencies between packages. That both locks you in to obsolete stuff and creates inflexibility. Third parties actively try to not have this problem. Debian is more flaky on this front than it technically needs to be.

Kind of the point of the article is that to vendor or not to vendor is a hot topic for Debian exactly because of this.


> There's zero advantage to sticking with the distribution specific versions of those packages which are typically out of date and come with distribution specific issues (including stability and security issues).

Uh, other than "apt install foo" versus "ok, let's go search for foo on the internet, skip that spam listing that Google sold ad space to, ok no I am on foo.net, let's find the one that corresponds to my computer…yeah amd64 Linux, rpms? no, I want debs…download, dpkg -i…oh wait I need libbar".


> But either way you lose a lot of the benefits that being officially in the Debian repos would bring.

The first thing I do when I hear about a new (to me) piece of software is an "apt-cache search $SOFTWARE". If it doesn't show up there, that's a red flag to me: this software isn't mature or stable enough to be trusted on my production machines.

Sure, I might go ahead and download it to play around with on my development machines, but for all the "I'm making it awesome!" arguments of developers, more often than not it's just an excuse for lack of discipline in development process.


This was exactly my concern. I believe that Debian packages should avoid vendoring when possible, but that means it must be possible to package the individual modules, even if there are multiple versions, and even if there are many small dependencies.


Agreed, no one is going to downgrade, but there is another strategy: always build your app against the package versions that are in Debian stable. Of course it can be problematic, but it has some advantages: well tested, and any bugs probably have a documented workaround.


Yep that was my take on it as well. Vendors should be able to test their software against a vanilla install of debian stable and build the deb package themselves, and then upload to the package repository for review. Else, the vendor provides instructions/support on their external website.


Personally I believe that vendoring is just the lazy approach by developers that do not want to care about the ecosystem their software runs in. Consequently, their software will probably not be maintained for a long time (Red Hat offers 10 years of support, for instance). It's a shame but it seems like the cool kids simply tend to ignore sustainability in software development.

Since npm,pip,go,cargo, etc. are open source projects, would it not be simpler to add a "debian mode" to them? In that mode, the tool could collaborate with the system package manager and follow any policies the distribution might have.


> Personally I believe that vendoring is just the lazy approach by developers that do not want to care about the eco system their software runs in.

I have to agree. If your dependency is used by more than one package within the distribution, it should be split out and not vendored - ultimately, this will reduce the total workload on package maintainers.

This however does not mean that distros should be splitting every dependency into a separate package by default, much less have a separate package in the archive for each build configuration - there's no need to litter the distribution repo with idiosyncratic packages that will never have more than a single project/app/library depending on them, and that's precisely where a vendoring/bundling focused approach might be appropriate. Such dependencies are more like individual source-level or object files, that have always been "bundled" as part of a self-contained application.


The problem is that in the real world, despite promises of semantic versioning, breakage often occurs, which makes it impossible for two packages to depend on the same version of another library even though in theory they should.


Debian is used to managing and packaging programs from the real world.

It is obviously easier to create breaking changes that will not be discovered in dynamic languages. But in the real world, where everybody does unit tests, it is difficult to understand what kind of new problems were not already encountered 20 years ago.


Well Rust and npm happened in that time.

I don't have much experience with the latter but I know of the former that despite promises of semantic versioning, it is quite common for a crate to not work with a newer version, requiring one to hard depend on a specific version.

Add to that that Rust's definition of “non-breaking change” in theory can include changes that A) lead to compilation errors or B) compile fine but lead to different behavior.


> If your dependency is used by more than one package within the distribution, it should be split out and not vendored...

That's like going back to square-one: Dependency Hell [1]. That's a regression, not a solution.

I believe the problem is not at the edges (OS packaging, software developers), but in the middle: The dependency handling in programming languages.

I.e. you need to be allowed to install multiple versions of a library under the same "environment" and activate the version you want at launch time (interpreted languages) or build time (compiled languages).

[1] https://en.wikipedia.org/wiki/Dependency_hell


Dependency hell is a result of not using sensible semantic versioning. If libraries are properly versioned, it's easy to make sure that each dependency uses the latest compatible version.


It's not like there's one ecosystem your software runs in. Sure, they could add a Debian mode. But you'd also need a Red Hat mode, Apple mode (and I don't know - do you need a homebrew mode that's different from the default mode?), Windows mode, etc. I think it's equally fair to say that the ecosystems just haven't solved the dependency management problem flexibly enough for everyone. Not everyone needs to or wants to support things for 10 years in order to have a Red Hat mode unless Red Hat is picking up that burden for them or paying them.


With scripting runtimes like ruby and python, it's already kind of "Debian mode", with a system-wide location for packages. With compiled languages that really prefer to statically link stuff written in that language (rust, go, etc) it's really not feasible.

In FreeBSD Ports we have a very pleasant solution for packaging rust apps. You just run `make cargo-crates` and paste the output into the makefile – boom, all cargo dependencies are now distfiles for the package.


> Personally I believe that vendoring is just the lazy approach by developers that do not want to care about the eco system their software runs in.

As someone with a foot in both worlds, allow me to try to put it more diplomatically: developers' job is to get into the weeds, pay attention to tiny little details, and know every piece of their code so well they often visualize the problem and fix in their head before even putting hands to keyboard. They don't want to have to deal with "other peoples' software."

System administrators (and by extension, distribution maintainers) have to take in the bigger picture: how will this affect stability? Resources? How will this package interact with other pieces of software? They have to consider use cases in everything from the embedded to insanely large clusters and clouds.

It would behoove developers to try to expand their awareness of how their software fits in with the rest of the world, if even for a short while. It's hard (I know), but in the end it will make you a better developer, and make your software better.

> Since npm,pip,go,cargo, etc. are open source projects, would it not be simpler to add a "debian mode" to them?

This is the best fucking idea I've heard in a while!


Cargo already has one: https://crates.io/crates/cargo-deb
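
From memory, a minimal use of it looks roughly like this (the package metadata comes straight from Cargo.toml):

  cargo install cargo-deb
  # builds a .deb under target/debian/ using the crate's own metadata
  cargo deb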


If upstream has decided to vendor, I only see two sensible options:

* package the vendored software in Debian, and annotate the category of vendored packages so it's clear to the user they cannot follow the normal Debian policies. I've been bitten by the lack of such feedback wrt Firefox ESR. My frustration would have gone away completely if the package manager told me, "Hey, we don't have the volunteer energy to properly package this complex piece of software and its various dependencies. If you install it, it's your job to deal with any problems arising from the discrepancy between Debian's support period and Mozilla's support period." As it is, Debian's policy advertises a level of stability (or "inertia" as people on this thread seem to refer to it) that isn't supported by the exceptions it makes for what are probably two of the most popular packages-- Chromium and Firefox.

* do not package software that is vendored upstream

I can understand either route, and I'm sure there are reasonable arguments for either side.

What I cannot understand-- and what I find borderline manipulative-- is pretending there's some third option where Debian volunteers roll up their sleeves and spend massive amounts of their limited time/cognitive load manually fudging around with vendored software to get it in a state that matches Debian's general packaging policy. There's already been a story posted about two devs approaching what looked to me like burnout over their failed efforts to package the same piece of vendored software.

Edit: clarification


It's a very reasonable policy to require an ability to build everything offline without accessing language "native" repositories. But I think a big problem is that Debian requires that each library be a separate package.

For classic C/C++ libraries it's not a problem, since for historical reasons (lack of a good, standard language package manager and thus a high level of pain caused by additional dependencies) they had relatively big libraries. Meanwhile in new languages, good tooling (cargo, NPM, etc.) makes the "micro-library" approach quite viable and convenient (to the point of abuse, see leftpad). And packaging an application with sometimes several hundred dependencies is clearly a Sisyphean task.

I think that, instead of vendoring, Debian should adopt a different packaging policy, one which would allow them to package whole dependency trees into a single package. This should make it much easier for them to package applications written in Rust and similar languages.


Well, C/C++ historically had no separate dependency management, making Linux distributions effectively the de facto package managers for C/C++.

Other languages do have package managers and not using those is typically not a choice developers make.

I agree vendoring npm, maven, pip, etc. dependencies for the purpose of reusing them in other packages that need them (as opposed to just vendoring the correct versions with those packages) is something that probably adds negative value. It's just not worth the added complexity of trying to even make that work correctly. Also package locking is a thing with most of these package managers meaning that anything else by definition is the wrong version.


> I think, that instead of vendoring, Debian should instead adopt a different packaging policy, which would allow them to package whole dependency trees into a single package.

I'm not sure how this is different from what I call vendoring, and I think this is indeed the solution.

In Go, there's "go mod vendor" which automatically creates a tree called "vendor" with a copy of all the sources needed to build the application, and from that moment on, building the application transparently uses the vendored copy of all dependencies.

In my ideal world, Debian would run "go mod vendor" and bundle the resulting tree into a source DEB package (notice that the binary DEB package would still be "vendored" because go embraces static linking anyway).

If the Debian maintainer of that application wants to "beat upstream" at releasing security fixes, they can monitor those dependencies' security updates and then, whenever they want, update the required dependencies, revendor and ship the security update.

What I totally disagree with is having "go-crc16" as a Debian package. I'm not even sure who would benefit from that, surely not Go developers that will install packages through the go package manager and decide and test their own dependencies without even knowing what Debian is shipping.
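
For anyone who hasn't used it, a minimal sketch of that flow (the dependency path and version are made up):

  # snapshot every dependency into ./vendor in the upstream source tree
  go mod vendor

  # build strictly from the vendored copies, no network access needed
  go build -mod=vendor ./...

  # later, to ship a fixed dependency: bump it, revendor, rebuild
  go get example.com/some/dep@v1.2.4
  go mod vendor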


> For classic C/C++ libraries it's not a problem, since for historical reasons (lack of a good, standard language package manager and thus high-level of pain caused by additional dependencies)

This is also one of the big reasons why header-only C++ libraries are so popular.


Speaking as a seasoned Node.js dev, if they think they can handle Node's nested vendored packaging system using flat debian packaging and guarantee correct behavior of the app they are sorely mistaken. It's a fool's errand. The sheer amount of effort being proposed here is astounding.


If all you have is a hammer...

It's not the first time Debian package policies seem backwards, trying to shove a square peg through a round hole. I hope the solution does not end up being "make APT do it", because APT is a terrible package manager to begin with (I hate every second I've had to fight APT over how to handle pip packages that I would very much like installed globally).


apt is an incredible package manager.

Don't blame the hammer, blame the carpenter.

The problem here is packaging, and maintainer decisions. And yes, I'm familiar with the issues here, and the bugs filed. I think it was handled... improperly as well.


This futureshock is a result of the rapid pace of new features implemented within commonly used libraries and immediately used by devs. The rapid pace is good for commerce and servers but it's bad for the desktop. Commerce pays almost all the devs (except those wonderful people at debian) so futureshock will continue. The symptoms of this imbalance in development incentives versus user incentives express themselves as containerization and vendoring.


Fedora has separate packages for libraries. But for nodejs, packaging individual libs led to a huge clusterfuck of difficult-to-maintain packages. Now they've decided that nodejs-based packages will bundle compiled/binary nodejs modules for now. https://fedoraproject.org/wiki/Changes/NodejsLibrariesBundle...


And for Golang, we try to unbundle; we have around 1,600 go libraries packaged. Some packages are still bundled, like k8s, due to dependency hell.


> Kali Linux, which is a Debian derivative that is focused on penetration testing and security auditing. Kali Linux does not have the same restrictions on downloading during builds that Debian has

The security auditing distribution has less auditable requirements around building packages?


This is the magic of "offensive" security, where you don't really bother with your own security posture :D

you know what they say, the cobbler's children are the worst shod ...


I think Kali is trying to offer the latest versions of each tool, because a pentest box that randomly breaks isn’t as serious as prod servers.


yeah but the challenge that's being referred to (I think) is that having an unauditable supply chain on a security distribution is, in itself, a security risk.

You've got to imagine that Kali linux is a very tempting target for supply chain attacks, if you can compromise a load of security testers, you might get access to all sorts of information....


It's not supposed to have any information or allow compromising security testers - it's never supposed to be used as someone's personal machine, it is intended to be used as a read-only live image or a disposable VM; you spin it up, launch a tool, note the results and wipe the machine, going back to a known state.

As you say, supply chain attacks are very much possible especially because you're intentionally running various third party exploits and malware which you are not going to be able to vet - so you don't expect it to be secure, you don't even bother trying to secure it or trying to verify if it's been broken - you always treat it as something toxic that should be isolated and have limited, transient access to any data.


Gotta say I've seen many many long lived Kali VMs or laptops over the years. Whilst ideally ephemeral OS images would be great, not just for Kali, but for testing environments in general, that doesn't always meet reality.

This (pentest tooling) is one of the areas that seems a good fit with containerization (podman, Docker, lxc etc), as their use case fits nicely (single use ephemeral images with some isolation)


Of course. Kali is generally booted as a live image, and one runs the entire system as root.

It is most certainly not designed to be secure. This is expecting a battering ram to be resistant against being battered.


raesene9 has a point about someone getting malware into your bleeding edge pentest tool.


I believe that one lesson here is that just because it's now possible to have a thousand dependencies doesn't mean you should have a thousand dependencies. It'll make your sysadmins very sad.

I don't want the latest libraries on my servers. I want my servers to be boring and not change often. I want them to run the time-proven, battle-tested and well-understood software, because I don't want to be the first to debug those. There are people better at that than me.

If, and only if, there's a blocker bug in a distro-provided package, I'll think of vendoring it in. And then only if there is no plausible workaround.

Of course, I also do testing against the latest stuff so I'm not caught off-guard when the future breaks my apps.


IMHO for these ecosystems we're seeing a swap in priorities between OS/distro and app - instead of having the server as the main unit, which provides certain libraries and certain apps, the approach is to have a box (very likely virtualised or containerised) that's essentially "SuperWebApp v1.23" and the server is only there to support that particular single app.

The server/os/distro/admin does not tell what library version the app should use; the app tells which library version it prefers, and either packages it with itself or pulls it at installation time. If something else needs a different version - then that something else should be somewhere else, isolated from the environment that's tailored for that app only. You don't go looking for the package of that app version for the Debian release that you have; you don't try to run a 2025 version of the app on a 2021 long term support version of the distro, instead you choose the app version that you want to have, and pick the Debian (or something else) version that the particular version of the app wants.

Also, an app like that does not expect to be treated as a composition with a thousand dependencies, it wants to be treated as a monolith black box. If there's a bug (security or not) in a dependency of SuperWebApp v1.23, you treat it in exactly the same as if there's a bug in the app itself - you deploy the update that the app vendor provides. In that context, a long-term support OS is required for the things that the app itself does not want to support (e.g. kernel and core libraries) - the app developer is not upstream for the distro, instead the app developer includes a distro (likely a long-term support version) as an upstream dependency packaged with the app VM or container.

If you need to go from "SuperWebApp v1.23" to "SuperWebApp v1.24", then the server can be treated as disposable, and everything either replaced fully or transformed in a noncompatible way to fit the new requirements - because, after all, that app is the only app that determines what else the server should have. Cattle, not pets; replaceable, not cared for.


I haven't seen it mentioned in that discussion, but vendoring is interesting from the reproducible-builds point of view, especially after the recent SolarWinds incident. The dependencies become one step removed from their upstream distribution and potentially patched. Tracking what you're actually running becomes a harder problem than just looking at package versions.

With vendoring we'll see Debian security bulletins for X-1.2.3 which actually mean that the vendored Y-3.4.5 is vulnerable. And if you're monitoring some other vulnerabilities feed, Y will not show up as a package on your system at all.


I haven't been through a lot of comments here or at the link, but I'll bring up something I ran into in what I realize now was an early version of "vendoring": over a decade ago I was playing around with https://www.ros.org/, and there were no distribution packages, so I went with the vendor method, and I distinctly remember it downloading gobs of stuff and building it, only to break here and there. It was fucking terrible to work with and I only did it because it was R&D, not a production grade project, and I was being paid full time for it.

Vendoring "build" processes, IME, are incredibly prone to breakage, and that alone is reason I won't bother with them for a lot of production stuff. Debian is stable - I can "apt install $PACKAGE" and not have to worry about some random library being pulled from the latest GitHub version breaking the whole gorram build.


I'm surprised the option of moving the package to contrib got so little support. Many of these packages don't seem a good fit for Debian stable and its security-patch model.


contrib is for software that doesn't fit into a fully FOSS ecosystem. It's not for sidestepping security or quality concerns.

I wouldn't want to see FOSS with no proprietary dependencies stuffed into contrib because of packaging issues.


ArchiveBox is fully FOSS but is almost unpackageable on stable because it depends on a mix of pip packages, npm packages, and chromium (which is only distributed via snap).

The core value provided by ArchiveBox is the integration of these disparate tools into a single UX, so it's stuck in contrib/ppa for the foreseeable future.

This is just one example of a FOSS package that doesn't fit neatly into Debian's distribution model, but there are many others.


It is hard to audit DFSG compliance for software whose build process pulls in dependencies at run time.


Then they should just make a new "vendored" repo for that kind of software?


I recently returned to Debian after a long hiatus in Ubuntu. This time, I'm using Guix as my package manager.

It's a wonderful combo. Bleeding edge, reproducible, roll-backable, any version I choose, packages if I want them via Guix. Apt and the occasional .deb file as a fallback or for system services (nginx etc). And Debian as the no-bs, no-snap, solid foundation of everything.

To me this is the future.


Have you considered GuixSD (Guix as an OS instead of just package manager) instead of Debian?


I have.... it was a bit too much of a leap for me at the moment. And also I rely for work on proprietary software such as MS Teams which I wasn't sure I could install on GuixSD?


hmmm, this is your desktop or a server?


Desktop, to be fair.


Debian already provides multiple versions of Rust crates; I don't see why such an approach wouldn't be viable for Node.js packages. For example, for the nom crate Debian provides versions 3, 4 and 5:

https://packages.debian.org/sid/librust-nom-3-dev

https://packages.debian.org/sid/librust-nom-4-dev

https://packages.debian.org/sid/librust-nom-dev


Rust is funny because it is perfectly possible to build a single binary that links multiple versions of the same library. Happens when a transitive dependency is written assuming the older API of a library.
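
Cargo resolves that automatically for transitive dependencies, and a crate that directly wants two major versions of the same library can rename them in Cargo.toml, something like:

  [dependencies]
  # two major versions of nom, side by side, under different names
  nom3 = { package = "nom", version = "3" }
  nom5 = { package = "nom", version = "5" }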


I think it all boils down to manpower. Rust crates need a limited set of compat packages and have a way smaller ecosystem than nodejs. Node developers tend to use as many dependencies as they can, resulting in hundreds of deps per app. Rust programs generally have fewer than 10 direct dependencies, and at worst fewer than a hundred indirect dependencies, so it is still manageable.


Nix seems to effectively have solved this, by more or less vendoring everything, but in a way that still allows shared usage. Having made a few deb and rpm packages in my life, I don't miss it. At all.


This is a bit of a pet peeve with Linux packaging systems.

I want application X.

  $ sudo apt-get install appX

  This is going to install 463 packages, do you want to continue (y/n)?

  # HELL NO
Seems like every time a language starts to get popular, this is an issue until you have 8 or 9 sets of language tools piled up that you never use.


Look for an appX-minimal package and add --no-install-recommends.

Don’t ask why these are not the defaults.
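
i.e. something along the lines of (appX-minimal being hypothetical, as above):

  sudo apt-get install --no-install-recommends appX-minimal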


Look, I need that leftpad import, ok?


That's not even getting into NPM or node!


What if linux distributions stop packaging stuff altogether?

Most of the time it seems to create more trouble than it's worth (for the developers and maintainers of such distributions).

Maybe just provide a base system and package management tools but leave the packaging to third parties.

We can see that already with repositories such as EPEL and others more specialized.


With most any distribution you can configure your own repositories.

Realistically, you could set up a minimal arch and host your own aur (there are projects for this). This is basically what the aur is.

Or Debian ppas if you’re looking for more self-contained bundles.

And there’s always gentoo.

I think what you want kind of exists and is in practice already :)

Personally I find A LOT of value in distributions and it's obvious that others do too - otherwise they wouldn't have the significance they do today.


Related article (linked in this one) from earlier this year, about Kubernetes and its Go dependencies

https://news.ycombinator.com/item?id=24948591


Fine-grained dependencies are crucial, but vendoring is terrible.

Check out https://github.com/kolloch/crate2nix/ and https://github.com/input-output-hk/haskell.nix for technical solutions to getting the best of both worlds.

Sorry, but there's just no way DPkg/APT and RPM/Yum are going to keep up here very well.


I hope Nix (or something like it) starts eating market share from other package managers, Docker, and the like. Nix solves this sort of thing at the cost of one of the cheapest things available, disk space. Every discussion about it mentions how complex it is; I remember giving up on creating a .deb after a few days of looking into that fractal of complexity, versus producing a Nix package within the first day of looking at the language.


I don't think nix solves this. You are still left with having to deal with security issues, updates and tracking on a per-package basis instead of once for the entire ecosystem.

Admittedly, this is a hard problem. And the languages that do use vendoring make it hard to programmatically inspect all of this. But what do you do if, say, the python library requests has a severe HTTP parsing issue which allows ACE?

How many packages would you need to patch on nix?

How many packages would you need to patch on Debian, Arch, Fedora, OpenSuse?


I think you might be misunderstanding how packaging is handled in Nix. Nix devs use semi-automatic tools to convert packages from programming language ecosystems to Nix packages, but these tools still have means to properly apply patches where necessary. Whether the vendoring approach is used depends on the actual tools being used, but that is mostly irrelevant. Being able to apply patches to all intended packages is a requirement for any packaging tool because patching is absolutely essential for packaging work.

> How many packages would you need to patch on nix?

So to answer your question, you only need to change a single file. For the requests library, this one[1]. You might also be interested in how Nix manages patches for NPM packages[2]. The amount of manual fixes required is surprisingly few.

[1]: https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen... [2]: https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...


> Whether the vendoring approach is used depends on the actual tools being used, but that is mostly irrelevant.

I don't think it is though? Because...

>So to answer your question, you only need to change a single file. For the requests library, this one[1]. You might also be interested in how Nix manages patches for NPM packages[2]. The amount of manual fixes required is surprisingly few.

Right, I assume python is easier in this scenario since there are not many cases where a python project would install N different versions of one package. I don't quite understand how these work if a python project depends on separate versions?

For the nodejs part I'm more curious. node_modules sometimes contain multiple versions of the same dependency, sometimes across multiple major versions. The patching in the files seems fairly trivial sed replacements and rpath rewrites. But how would security patches be applied across versions?

I also took a quick look at the go stuff, and it seems like there is no such thing there, as `deleteVendor` defaults to false, thus each Go application is self-contained. How would patching dependencies work here?

https://github.com/NixOS/nixpkgs/search?q=deleteVendor


> I don't quite understand how these work if a python project depends on separate versions?

For Python packages in the official Nix repository, the packages AFAIK aren't auto-generated. In this case, Nix devs split out the common part of the package definition to resemble the following pseudocode:

    def commonDefinition(version):
        return {
            'src': 'http://...',
            'sha256': '000...',
            ...
        }

    packageV1 = commonDefinition(1)
    packageV2 = commonDefinition(2)
> For the nodejs part I'm more curious. ... But how would security patches be applied across versions?

I guess this was a bad example, as I incorrectly assumed it was patching dependencies when it wasn't. But you can though, by matching package names. The Nix language is powerful enough to do this.

> thus each Go application is self-contained

I wasn't aware of the go situation, but this does seem to be the case. However, this looks incidental rather than it being a hard requirement. Many tools provide mechanisms to centrally maintain patches, which would work whether or not vendoring is enabled.


I think this illustrates my point though. Nix doesn't necessarily solve the overarching issue of having vendored dependencies. And it doesn't seem like it's being worked on either. There might be work on this on a per-ecosystem basis, but this isn't necessarily a goal of NixOS itself.

The intention here isn't to talk shit about nix though. I just wonder why people present it as being the solution to this issue.


Do you mean as a package maintainer or as an end user? I expect automation and reproducible builds to make this near trivial as a maintainer. As an end user binary diffs will be helpful (not sure if Nix supports them yet), but modern hardware and network connections can easily upgrade a thousand small packages in less than a minute.


Reproducible as in build environments or deterministic binaries? Nix has only reproducible build environments.

Package maintainer. For the end-user there is no practical difference between a container and nix, and you see how well the container ecosystem is currently handling security updates on their distributed images.

The problem is not distributing the fix, it's getting the fix patched.


Gentoo probably solves it better.

The developers allow multiple versions of the same library when there are problems and they have deemed it necessary, but libraries where it makes no real sense are not multislotted accordingly.

When developers realize that two packages need a different version of the same library due to issues that should not exist, they multislot the library in response, or, if it is trivial, patch whatever package relies on the faulty behavior.


Everyone can provide multiple versions though, Arch does this. It works well to work around stuff in the existing ecosystem (e.g openssl1.0, gtk2/3/4, qt and so on). But I don't quite see how this solves anything for modern languages like Rust and Go?

Do you have any documentation for how this is dealt with in languages that utilize vendoring to the extreme?


The difference with those systems is that the different versions are encoded in the package name, and that they only go so far as to provide different versions of packages that are designed to be installed as such under different sonames, because they're binary distributions.

On Gentoo, they are permitted to rename these libraries arbitrarily since software is compiled locally, so my Krita can be linked to Qt libraries with a different path than your Krita. So if Qt ever broke API or ABI without updating its soname to reflect that, Gentoo could elect to manually rename the libraries and compile whatever needs them against the appropriate paths.


Yes. With docker you may as well forget about updating anything ever; you really don't know what's in there. Some insecure openssl/nginx versions? Who knows! Can't just update them.

The real issue with nix is the amount of building that needs to take place. If you go to update openssl, everything dependent on it also needs to be rebuilt to get the latest. The difference in comparison to, say, static linking or docker, though, is that it's easy to automate. Hydra is proof. It's also easy to continue to use shared libraries, sharing the storage and memory burden. This is unlike docker or everything using static linking.

Having a few threadrippers laying around to do all that building might be useful.


> The real issue with nix is the amount of building that needs to take place.

Good point, but this is one of the best problems to have: it can be solved with modest amounts of money for build infrastructure, and it's unlikely to ever get orders of magnitude worse than it already is.


Debian stable offers unattended patches, which is something I value highly in public-facing server deployments.

I haven't seen anything like this even proposed for NixOS.

https://wiki.debian.org/UnattendedUpgrades


NixOS does have unattended automatic upgrades: https://nixos.org/manual/nixos/stable/#sec-upgrading-automat...


Mea culpa: I should have been aware of that.

I'm looking at https://status.nixos.org/

Are the 6-12 month old releases more stable than the 0-6 month releases? (i.e., is 20.03 more stable than 20.09?)


Maybe I was just unlucky... But when I tried Nix, the first thing that happened was that it did not deliver what it promised. The package was not properly isolated, so it ended up depending on something from /usr/bin. This was pretty disappointing for a first try.

Also, while this is a minor cosmetic thing, I highly dislike the location of the nix-store. It doesn't belong in /, and long path names (due to the hashes) are very impractical to use.

All in all, I hear a lot of good things about Nix, esp. from the Haskell community, but whenever I try it, I develop a deeply rooted dislike for the implementation of the concept.


> the package was not properly isolated, so it ended up depending on something from /usr/bin.

I don't know when you last used Nix, but Nix now enforces sandboxed builds by default, so it should be better at catching these kinds of things during packaging. But note that isolation in Nix is mostly a build-time thing, and it does not prevent running programs from accessing filesystem paths in /usr. You could still fire up a bash prompt and enter "ls /usr/bin"; there's nothing stopping you from doing so.

> I highly dislike the location of the nix-store. It doesn't belong in /

I see many people express this sentiment, but I'm not sure what's wrong with /nix/store when you've mostly[1] abandoned /usr. Nix is fundamentally incompatible with traditional Unix directory layouts.

> long path names (due to the hashes) are very impractical to use

That's why you never need to specify them directly. You can either install packages globally and get them symlinked into /run/current-system/sw/bin or ~/.nix-profile/bin, both of which are included in PATH, or use nix-shell and direnv to automatically add packages to PATH whenever you enter a specific directory.

[1]: "Mostly," because /usr/bin/env is kept for compatibility


A friendly reminder that if you enjoyed this article, please consider subscribing to LWN. It's an excellent news source, and they employ people full time, so they need real money in order to survive.

Normally articles are restricted to subscribers initially, and are made available to everyone a week after being posted. But subscribers can make Subscriber Links that let non-subscribers read a specific article straight away. I've noticed a lot of subscriber links (like this one) posted to HN recently - there's nothing wrong with that but, again, please remember to subscribe if you like them.


Jonathan Corbet (the founder and head cheese @ lwn) is an exceedingly nice guy and as a result of all of his work, ended up as a maintainer of much of the linux kernel documentation. He's one of those real unsung heroes of linux and also does things like the "Linux Kernel Development Report". Super good people and very professional all around.


I love that every year my LWN subscription expires and they don't auto-renew it. It's such a nice way to treat your customers, and it differentiates LWN from most other media companies.


LWN is my favorite paid subscription of all time. It feels as if you are making the world a better place when you type in the credit card number to renew.

I paid for another year just this morning, so the sensation is fresh.


Do they have a bitcoin/monero donation address? I'd donate today.


"One package manger to rule them all"... pip/gem/npm/cargo/cabal&stack/whatnot all pose the same issue. From a distributors point of view, I get why you dont want to solve all these problems. From a user point of view, there is no good reason why I should learn more then one package manager.

It's a dilemma. A similar thing is happening with ~/.local/

When I started with Linux 25 years ago, it was totally normal to know and do the configure/make/make install dance. 10 years later, most of what I wanted to use was available as a package. And if it wasn't, I actually built one. These days, I have a 3G ~/.local/ directory. It has become normal again to build stuff locally and just drop it into ~/.local/. And in fact, sometimes it is way easier to do so than to try and find a current version packaged by your distribution.


> totally normal to know and do the configure/make/make install dance

I always cringe when the readme starts with something something Docker. "Don't try building this yourself" is an unfortunate norm for opensrc.


I usually give up on the software if the INSTALL file only has references to Docker.

A few times I tried really hard not to give up, but the reality is that Docker-only is highly correlated with non-working, so up to now I have eventually given up on every piece of Docker-only software I have met.


Same here. There are exceptions to everything, but to me it feels like if someone distributes their work via Docker, they are actually not on top of the complexity they are trying to deal with. Pushing that complexity into something like a Docker image doesn't make it go away. At best, a big chunk of bloat is generated which at the very least trashes your disk cache. At worst, a big hunk of unmaintained software is waiting in a corner to be taken apart and hacked to pieces.

Docker images have an air of really big bit graves.


> These days, I have a 3G ~/.local/ directory. It has become normal again to build stuff locally and just drop it into ~/.local/. And in fact, sometimes it is way easier to do so than to try and find a current version packaged by your distribution.

That's great; now move it to your production webserver where the user the server runs as doesn't have a "home" directory, or you run into a dozen other reasonable security restrictions that break the tiny world that vendoring was never tested outside of.


Indeed the fact that, in many cases, one has to compile software themselves just to get a reasonably recent version of it is one of Linux's most colossal and inexcusable failures.

Fortunately people are starting to come around to things like AppImage, FlatPak, and (to a lesser extent) Docker in order to deal with it.


I start with the assumption that Node.js is, itself, not fixable at the distro level, end of story.

But, what about the rest? The problem to solve is things like Go packages that want to static-link their dependencies.

One way forward is to consider programs that want to static-link as being, effectively, scripts. So, like scripts, they are not packaged as complete executables. Their dependencies do not refer to a specific version, but provide a link to where to get any specific version wanted. The thing in /usr/bin is a script that checks for a cached build and, if needed: invokes the build tool, which uses local copies where found and usable, and downloads updates where needed, and links.

A package- or dist-upgrade doesn't pull new versions of dependencies; it just notes where local, cached copies are stale, and flushes. On next execution -- exactly as with scripts -- invalidated builds are scrapped, and rebuilt.

It means that to use a Go program, you need a Go toolchain, but that is not a large burden.

It means that the responsibility of the package system is only to flush caches when breaking or security updates happen. The target itself arranges to be able to start up quickly, on the 2nd run, much like the Python packages we already deal with.
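
A minimal sketch of what such a /usr/bin stub could look like, assuming a Go program; the module path, cache location, and stamp-file convention here are all invented for illustration, and the package manager would simply delete the stamp file when it flushes the cache:

    #!/usr/bin/env python3
    # Hypothetical launcher stub: rebuild the cached binary if the
    # package manager has invalidated it, then exec it.
    import os, subprocess, sys

    CACHE = os.path.expanduser("~/.cache/pkg-builds/some-go-tool")
    BINARY = os.path.join(CACHE, "some-go-tool")
    STAMP = os.path.join(CACHE, ".fresh")  # removed on breaking/security updates

    def ensure_built():
        if os.path.exists(BINARY) and os.path.exists(STAMP):
            return  # cached build is still valid
        os.makedirs(CACHE, exist_ok=True)
        # The toolchain reuses local module copies and downloads what it must.
        subprocess.run(["go", "install", "example.org/some-go-tool@latest"],
                       check=True, env=dict(os.environ, GOBIN=CACHE))
        open(STAMP, "w").close()

    ensure_built()
    os.execv(BINARY, [BINARY] + sys.argv[1:])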


It’s unsustainable to expect package maintainers to create packages and backport security fixes for every piece of software in existence, and big ecosystems like Node.js make this blindingly obvious.

Developers should be able to just ship software directly to users, without package maintainers standing in the middle.

Hopefully Snap or Flatpak solves this!


I don't understand why Debian wants to package nodejs libraries. There is already a package manager for nodejs; why do they have to package them again? The same applies to php or python libraries; both ecosystems have their own package managers.


They don’t want twenty .deb node apps to depend on twenty different versions of the same library, because backporting security fixes to each of them years from now could be a nightmare.


Why is Debian responsible for backporting security fixes for the thousands of deb packages available? Shouldn't that responsibility be handed to the package authors/maintainers themselves?


If I understand correctly, Debian decides to deliver a certain version of a package (say v1.2.3) for a certain Debian version and they generally keep that version of the package fixed (or try to stay 100% compatible with it), minus security/major impact bugs which get backported. By doing this, Debian can ensure that when you upgrade the system, nothing breaks.

While it's not uncommon for upstreams to offer a stable or LTS channel that mostly works like this (and generally stable distributions decide to package this version), the whole value of Debian is to offer a layer on top of many upstreams with different speeds / practices / release policies / etc. and offer you a system that works well together and doesn't break. So the work/backports they need to do is mostly related to different upstreams working differently or in a way that doesn't allow Debian to stay pinned on a certain version.


Debian Developer here: upstream developers almost never care to prepare fixes for existing releases.


If an upstream dev isn’t supporting software anymore maybe the users should stop using it?


Here's the problem. Say I develop program/library/whatever Foo.

I make a new release every six months, so we have Foo 1, six months later Foo 2, six months after that Foo 3, and so on.

Between the time Foo n and Foo n+1 are released, I'll release minor updates to Foo n to fix bugs, and maybe even add minor features, but I don't make breaking changes.

Foo n+1 can have breaking changes, and once Foo n+1 is out I stop doing bug fixes on Foo n. My policy is that once Foo n+1 is out, Foo n is frozen. If Foo n has a bug, move to Foo n+1.

A new version of Debian comes out, and includes Foo n which is the current Foo at the time. Debian supports new versions for typically 3 years, and the Debian LTS project typically adds another 2 years of security support on top of that.

That version of Debian is not going to update to Foo n+1, n+2, n+3, etc., as I release them, because they have breaking changes. A key point of Debian stable is that updates don't break it. That means it is going to stay on Foo n all 3 years, and then for the 2 after that when the LTS project is maintaining it.

That means that Debian and later Debian LTS ends up backporting security fixes from Foo n+1 and later to Foo n.


> Debian supports new versions for typically 3 years, and the Debian LTS project typically adds another 2 years of security support on top of that.

I don’t think Debian should try to do this. They should just ship whatever the current release of upstream is. Or better yet, just allow upstream to ship directly to users.


The whole point of a stable release is to be, well, stable.

People want to be able to develop their things they need, such as their organization's website, online store, internal email system, support system, and things like that, deploy them, and then get on with doing whatever the organization was organized to do.

They don't want to have to be constantly fiddling with all those things to keep them working. They want to build them, deploy them, and then not have to spend much effort on them until they want to add new features.


Debian are the package maintainers.


Right, and therein lies the rub. Unless Debian wants to "boil the ocean" and burn a ludicrous amount of effort on repackaging every last npm and pip package for Debian, that position seems unsustainable long-term.

I'm of the opinion that Debian should create more distribution channels (that are available by default) where they are not the maintainers, and old releases are not forced to stay patched in order to remain installable.


They're talking about "application projects" which I understood as actual programs, not libraries. As a user I don't care if the tools I'm using are written in C, Python or JS, so I shouldn't have to remember which package manager to use, Debian should include them all.


> Why do they have to package it again?

Many of those package managers -- like npm and pip -- will require a compiler toolchain to build/install packages with native code components, like packages that bind against C libraries. That isn't an acceptable requirement.


Apps. They have to distribute a bunch of tools written in those languages, for example an Electron app for nodejs. On top of that, a bunch of language package managers are not made for final distribution, mainly for development; pip does not have a proper uninstall. This is worse in languages like Rust, Go, Haskell and nodejs, where the ecosystem design is not really compatible with the policies of these distros. And Rust and nodejs end up in weird locations, so they can eventually be needed in the base system.


Note "SubscriberLink" and "The following subscription-only content has been made available to you by an LWN subscriber."

You are likely not supposed to post it to HN


https://lwn.net/op/FAQ.lwn says

> Where is it appropriate to post a subscriber link?

> Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared.


Thanks! I was unaware of this approach, thanks for the clarification!

"subscription-only content" confused me and I failed to parse "has been made available to you by an LWN subscriber".

It is the first time I have seen such an approach explicitly encouraged, and I really like it. Hopefully it is going well for LWN.


I'm not entirely sure the security argument makes sense here.

If a library is API compatible, does it matter if it's been vendored or not? If it's not vendored, you release the new build and you're done. But if it's been vendored into 20 packages, you just need to bump the vendored version & rebuild those packages.

The languages we are discussing where vendoring is common have simple build processes, and well defined dependency management mechanisms (go.mod, package.json). So it's not difficult to bump the version of a dependency and rebuild a package in those languages. A large part of the work here should even be able to be automated.
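
For instance, a rough sketch of that kind of automation for npm-style packages (the package paths, dependency name, and version range here are made up): rewrite the pinned version in each manifest that declares the dependency, then let the normal install step regenerate the lockfile and node_modules.

    # Hypothetical sketch of automating a dependency bump across packages.
    import json, pathlib, subprocess

    def bump_dep(pkg_dir, dep, new_range):
        manifest = pathlib.Path(pkg_dir) / "package.json"
        data = json.loads(manifest.read_text())
        if dep in data.get("dependencies", {}):
            data["dependencies"][dep] = new_range
            manifest.write_text(json.dumps(data, indent=2) + "\n")
            return True
        return False

    for pkg in ["packages/foo", "packages/bar"]:
        if bump_dep(pkg, "leftpad", "^1.3.1"):
            # refresh the lockfile / node_modules with the patched release
            subprocess.run(["npm", "install"], cwd=pkg, check=True)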


> The languages we are discussing where vendoring is common have simple build processes

For most packages, yes. But then you've got kubernetes. Or openstack. Or keras/tensorflow stack. They are significantly harder to deal with than anything else and essentially could build their own distributions around themselves.

Or pandas+scipy+numpy+mpl which lots of people just give up on and use conda.



