Fedora opens up to bundling (lwn.net)
70 points by perlgeek on Oct 21, 2015 | 45 comments



This is an unfortunate development. It's tough to fight against the tide of developers who simply don't care or honestly believe it is a good thing, but it's a fight worth having for the better security and easier maintenance of GNU/Linux systems, which directly benefits users. Thankfully, Debian hasn't given up, nor has GuixSD, which I help maintain.

I asked Tom Callaway from Red Hat about it and he said "I'm not a fan, I think its a poor decision, but I also appreciate that I might be in the minority these days." [0]

Hopefully, once enough people have been burned by the apparent convenience of bundling, we'll see the tide change. Maybe after Dockerization has run its course.

[0] https://twitter.com/spotrh/status/656677002028691456


I disagree that bundling is a bad thing. Shared libraries are just as bad a solution as Windows DLL hell. With shared libraries, you have to trust that outside developers will not change function signatures or break your dependencies in some way. You have no recourse if this happens, and it happens all too often.

It seems like an impossible mandate to ask developers to keep updating their code against new signatures and updated libraries for eternity, especially when their project is finished and feature-complete and they've moved on to newer projects. Developers bundle because we can't trust other developers not to break the rules, and we are tired of the death march of fixes.


The instability of libraries is enabled by the bundling culture of the ecosystem.

System libraries are extremely API and ABI stable, including libc, libz, libpng, xlib, gtk+2, glib2, etc.

npm enables bundled libraries to have their own bundled libraries, and also encourages trivially small "libraries", so the explosion of sub-dependencies is entirely unmanageable and the best you can do is let every lib bundle its own copy of every other lib and try to forget about it all.

But large, long-lived Linux distros, with useful applications like Firefox and GIMP and a large number of common supporting libraries, show that it doesn't have to be that way. You can have just one zlib, you can have just one gtk+2, and you can install bugfix and security updates for them.
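To make that concrete (a sketch with paraphrased output; exact binaries and paths vary by distro), ldd shows the sharing directly:

    $ ldd /usr/bin/gimp | grep libz
        libz.so.1 => /lib64/libz.so.1 (0x...)
    $ ldd /usr/bin/inkscape | grep libz
        libz.so.1 => /lib64/libz.so.1 (0x...)

One copy of libz.so.1 on disk; patch it once, and every dynamically linked consumer picks up the fix on restart.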


API and ABI stability are good, but they're no guarantee against breakage.

A glibc change to the memcpy implementation broke hundreds of programs a few years back. Note that it is conceivable that this change introduced security vulnerabilities.


It's funny that you listed e.g. libpng but not libjpeg, which breaks its ABI very often.


libjpeg is silly, but that's why everyone uses libjpeg-turbo now, which maintains API/ABI compatibility with libjpeg v8.

http://www.libjpeg-turbo.org/About/Jpeg-9

And if it comes to having a library or two with two ABIs/APIs, like gtk2 and gtk3, or libjpeg7 and libjpeg8, that's not a big deal - those two lines have different "sonames", and it's just those two versions to manage globally, rather than a per-app multitude.


>With shared libraries, you have to trust outside developers will not change function signatures or break your dependencies in some way.

With bundled libraries, the user has to trust outside developers, who didn't write those libraries, to stay on top of critical security updates to those libraries and to release new versions of their software with the relevant patches. This is completely unsustainable, both for the developer and for the user.


Both solutions have flaws. This is a no-win situation, but it's worth understanding that neither approach is a silver bullet. I commented because the anti-bundling camp seems to think there are no benefits to bundling, and that is obviously false.

Yes, both solutions are unsustainable. Finding a better solution is where the discussion should be, not nitpicking which side is less broken.


The issues with bundling far outweigh the benefits.


I disagree.


Become a distribution maintainer and you will change your mind.


While I'm not a fan of this development either, the position w.r.t. darktable was frankly unreasonable: the library in question is not meant to be a system library. Indeed, I believe digikam had similar problems. Having a modicum of flexibility in these cases may be positive.


It was a very contentious decision, causing a huge ruckus on the mailing list. I for one don't agree with it at all.

The only small blessing is that RPM metadata will contain:

    Provides: bundled(crappy-library)
so it's possible to automatically determine which packages embed that library should a security bug arise. (It doesn't make it any easier to fix the N copies of that library, of course.)
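At least it makes the audit scriptable. A sketch (reusing the placeholder library name from above; both rpm and dnf understand virtual Provides):

    # installed packages that embed their own copy:
    $ rpm -q --whatprovides 'bundled(crappy-library)'
    # the same question asked of the whole repo:
    $ dnf repoquery --whatprovides 'bundled(crappy-library)'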


Has Debian really not given up?

Bundling is why Chromium is (still) not in Fedora's repos[0], yet Debian has shipped it for years. I've assumed Debian was more lax on bundling or build reproducibility issues - is there another reason?

0 - https://fedoraproject.org/wiki/Chromium


Currently, Debian is probably the biggest proponent of reproducible builds.

https://reproducible.debian.net/reproducible.html


This isn't really a Docker thing though, is it? It is talking about the case of a program that includes its dependencies in its own source code tree. I think docker-ized programs would still link against e.g. the system libssl rather than ship a copy of an ssl library with a program's source. Another term sometimes used is "vendoring".

A lot of Java programs ship jar files in their source tarballs; it has traditionally been a lot of work for Debian devs to pick these apart. Similarly, many "things" (programs or web services) that use javascript libraries often ship minified versions of common stuff like jquery rather than use the system version. It's quite a mess. I think a lot of it stems from the fact that traditionally these sorts of libraries (jars, javascript) have not been well packaged, or even packaged at all. The program authors are making life easier for the majority by shipping all the deps together. It's not good for distros, but I can see the advantage.

I think Subversion has a nice workaround for this - they include a script to download the dependencies if you need them; otherwise the default is to link against system deps.
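If I remember correctly, it's a helper called get-deps.sh in the source tree, and the flow is something like this (paraphrased from memory, so treat the details as approximate):

    $ ./get-deps.sh    # fetch local copies of apr, zlib, sqlite, etc. if needed
    $ ./configure      # the default is still to pick up system deps if present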


The problem starts when you consider things like Kerberos, where MIT Kerberos and Heimdal both implement the same basic Kerberos API but have mutually incompatible portions of the libraries -- there are functions that take different arguments depending on which library you're talking about, etc. So programs that want to implement Kerberos authorization need to either put in a bunch of ifdefs and the like to handle this, declare that their program just isn't going to work with one or the other, or bundle their own known-good version. Or look at sqlite, where everyone and their brother ships their own version, because there are a lot of defaults and settings which can affect behaviors in ways that may be beneficial to an application.
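To make the ifdef route concrete, it ends up looking something like this (the function name, types, and feature macro here are hypothetical, not the real MIT/Heimdal API, which I don't remember exactly):

    /* hypothetical glue; the real signatures differ, but this is the shape of it */
    #ifdef HAVE_HEIMDAL
        ret = krb_auth_init(ctx, principal, &opts, NULL);  /* extra argument */
    #else  /* MIT */
        ret = krb_auth_init(ctx, principal, &opts);
    #endif

Multiply that by every divergent call site, and bundling starts looking attractive.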

Bundling is a necessity because software interactions are complex, and sometimes developers get tired of having to field support requests because packagers build programs with silly options. Including a known version of a library lets a developer pin down the behavior a lot more, which eases the burden on them because they don't need to worry about how Debian or Guix is going to screw up their programs.


Why can't you just have both libraries installed?

I agree about sqlite though, it's meant to be directly embedded and configured for your application.
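SQLite's upstream even supports that directly with the single-file amalgamation, so an application can pin behavior at compile time. A sketch (the SQLITE_* flags are real compile-time options; the file names are illustrative):

    $ gcc -c sqlite3.c -DSQLITE_THREADSAFE=0 -DSQLITE_OMIT_LOAD_EXTENSION
    $ gcc myapp.c sqlite3.o -o myapp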


I see it as more of a security issue.

If your distro is all about security, then bundling is really bad and you shouldn't allow it. If your distro is about letting end users run whatever software they like, regardless of how it is packaged, then you shouldn't care.

Just warn your users of the potential dangers installing the package may cause and move on.


Yeah, warn your users - "Please note your security is at risk...<click>

No wait, wait, come back you're choosing to give up your security..."


Folks, please read the article. This has nothing to do with Docker.

It's referring to bundling as part of RPMs, i.e. having application packages bundle some or all of their dependencies along with the application. This is generally frowned upon, and with good reason, but it's being practiced with increasing frequency.

The Fedora devs have had a lot of problems with this over the years. More recently, libraries like rawspeed have sprouted everywhere -- they are meant to be used only as bundled libraries.

So they either had to change the rule and allow bundling, or find themselves unable to distribute applications that are otherwise useful to developers, I guess. They "opened up to bundling" in that they introduced a new provision which allows bundling in cases where there's no alternative, but the RPM has to be marked as including bundled dependencies.

Don't get me wrong, I'm all for bashing Red Hat's perpetual beta crap distro, but this time they got it right. Really. It's a sensible decision, not some wavefront of Linux innovation bullshit.


Policies that are out of line with reality are bad policy: the war on drugs does not fix drug abuse, vagrancy laws do not fix poverty, and the war on bundling merely ensures that bundled software goes unreported.

The metaphor doesn't pan out. The third is canonizing a technical error.


That's an opinion. There's a whole debate on bundling vs sharing, and neither position is canonically correct. That said, it is only a 'technical error' if you take the view that shared libraries are the only way.


Bundling involves shared libraries. I'm not sure where you came up with this distinction.


In this entire post, bundling refers to software that includes a specific version of each library in its own binary, while shared refers to software that uses the system version and requires dependencies to be installed out of band (possibly by the package manager). They are competing ideologies.


Both approaches involve shared libraries, which are simply dynamically linked objects. The article outlines a problem of distribution, not linking strategies. You're using shared libraries in both contexts.

This is a misuse of well-established terminology. To say this is an ideological issue is a balance fallacy, as the package management approach has long been shown superior (by the likes of e.g. Nix and Guix).


Both sides have their arguments, as outlined by https://news.ycombinator.com/item?id=10425913, so it's really not a black and white 'technical error' issue.


I have worked on several large applications where libraries were customized and bundled in. We would have been better off in the long term implementing the small delta we needed from the library in our application. In every case I saw, it was just an example of lazy engineering that led us to bundling.


Why Docker, why not RPMs? They are not that hard to build, and they have years of design and work behind them. I hate bloat. 20 years ago I could get the same work done as today in a 1000th of the memory and disk space.


I've used Fedora and Ubuntu (on servers, laptops, and desktops) for many years. Since the advent of package management, Fedora has been significantly more "just works" when it comes to anything slightly professional or complicated (of course Ubuntu has had the edge on personal multimedia), and I'll bet this practice of discouraging bundling is a huge part of that. On the other hand, Fedora usage is falling, proportionally, right? I'm not sure, but it seems like it. Anyway: tough call.


> ...and I'll bet this practice of discouraging bundling is a huge part of that.

Your bet would be wrong. Debian also discourages bundling[1], and so does Ubuntu[2]. The reason the policies look similar is that Ubuntu policy is derived from Debian policy.

[1] https://www.debian.org/doc/debian-policy/ch-source.html#s-em... [2] http://people.canonical.com/~cjwatson/ubuntu-policy/policy.h...



Fedora usage is falling probably because many previous Fedora users are a bit jaded with Red Hat and the way systemd has been shimmed into everything. There's no benefit to using Fedora over Arch Linux anymore; you get more control and you don't have to fight the system to get anything technical done.

Or, they went to the UNICEs, which is what I did.

(this is anecdotal, I only know a dozen or so fedora users and they all jumped ship recently because they felt the offering is sub-par now)


> Fedora usage is falling probably because many previous Fedora users are a bit jaded with Red Hat and the way systemd has been shimmed into everything

Or you could appreciate that Red Hat funds so many great projects and cares about the future of Linux.

> There's no benefit to using Fedora over Arch Linux anymore

Fedora Workstation is easy to install and has a usable default desktop. No need to tinker with the OS, you can be productive very quickly. Last time I checked, Arch didn't even have an installer.


What use is an installer when your OS never breaks?

The only reason I'm so familiar with anaconda is that my system would break in random, weird and inexplicable ways on occasion.


Anecdotally, I switched from Fedora to Arch for personal use because I was tired of my system breaking every 6 months. Two years on, and I've spent a great deal less time on system administration than I would have if I'd stayed with Fedora. Once you get past installing Arch for the very first time, it gets really easy.


Fedora usage is falling because they discourage packagers, so it is hard to add new packages to Fedora; when a user needs a package, they will look for another distro.


I left all things RPM in 2003. All developments since have not proven that decision to be in error.

With the promotion of NetworkManager, PulseAudio, systemd, and now these bundling practices, we shouldn't even be calling this a GNU distribution anymore. It's Red Hat's job-security-by-obscurity stack that happens to be running on a Linux kernel. And all distributions that adhere to this new standard base (Arch, Debian, etc.) should be considered under that same definition.


Absolutely could not agree more.


After your restriction there are not many distros left. Slackware user?


Gentoo user, but Slackware is definitely still an option. Linux Mint Debian Edition still doesn't use systemd, nor does Morpheus Linux.

But yes, there aren't very many GNU/Linux distros left.


I think this makes it exceptionally hard for a distro to handle languages like golang, where vendoring libraries is the norm rather than the exception.

There were some serious issues with this on the golang-nuts Fedora ML some time ago, where lsm5 was lamenting the issues Fedora faces when upstream simply won't remove vendored libs.


Go in many ways is a bold step backwards in software engineering.


Why?


Seems to fix the wrong problem to me.

The major issue, as best I can tell, is that most package managers can't handle having multiple minor versions of the same lib installed side by side. This is because they sort on a name-version basis.

Heck, even with major versions the separation is usually a hack: putting the major version number into the name part of the package id (name1-version, name2-version, etc).

Thus you get conflicts when you want to install name-version and name-version+1 at the same time.

This is not a Linux problem though, as at the OS level the libs are kept separate using the soname system.

https://en.wikipedia.org/wiki/Soname
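Concretely, the soname is baked in at link time and recorded in the binary. A sketch (the -Wl,-soname flag is the standard GNU toolchain mechanism; file names are illustrative):

    $ gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.2.3 foo.c
    $ gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.1 foo.c
    $ objdump -p libfoo.so.1.2.3 | grep SONAME
      SONAME               libfoo.so.1

Both major versions can live side by side: old binaries keep loading libfoo.so.1 while new builds link against libfoo.so.2. It's the package database, not the OS, that objects.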



