Hacker News
The end of the Iceweasel Age (2016) (lwn.net)
45 points by beefhash on Dec 20, 2017 | hide | past | favorite | 11 comments


It will be interesting to see how this hits everyone in the Debian libre downstream - Parabola currently provides only the Ice* browsers. Could we even see users request that the Iceweasel branding stick around, for consistency?


RStudio would presumably present similar issues today?


RStudio isn't in the official Debian repos for this very reason. It's also why RStudio's downloads are hosted on their own site.


What is that reason? They enforce strict guidelines about using their logo and name that are incompatible with Debian policy?


The gist of it is that Debian patches everything to bring software into alignment with Debian philosophy. They backport bugfixes, correct weird path usage, and generally make the software play nicely with the distro.

Some developers are bothered by this patching and insist that any patched version of the software is no longer the original software, so they make life difficult for the distro by going after trademarks and artwork.


Is there a requirement for the Debian developer doing the patching to get it accepted upstream or at least get upstream review before patching?


When possible, DDs do submit patches upstream, but this isn't always possible or appropriate.

Examples of unlikely to be upstreamed patches: Sometimes upstream software was explicitly developed as a tool for another distro, so Debian patches it such that it is usable/applicable to Debian.

Sometimes upstream software will vendor libraries otherwise shipped with Debian. Debian will often remove the vendored version and patch to link against the system package.

Sometimes upstream software has tests that fail within Debian's build system, so Debian will patch the software to build/test under their build system.

Sometimes Debian disagrees with the default preferences set by upstream software, and will patch in new default preferences.
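As a sketch of how such a patch typically gets made, here is the quilt-based workflow Debian source packages use. The package name, file path, and preference are all made up for illustration:

```shell
# Hypothetical sketch: changing a default preference in a Debian source
# package via quilt. Package/file/setting names here are invented.
export QUILT_PATCHES=debian/patches

apt-get source somepackage          # fetch the upstream tarball + debian/ dir
cd somepackage-1.0

quilt new no-telemetry-by-default.patch   # start a new patch
quilt add src/defaults.conf               # register the file before editing
sed -i 's/telemetry_enabled = true/telemetry_enabled = false/' src/defaults.conf
quilt refresh                             # write the diff under debian/patches/

dpkg-buildpackage -us -uc                 # rebuild; patches applied at build time
```

The upstream tarball itself is never modified; the change lives as a separate diff in debian/patches/, listed in debian/patches/series, which is also what makes these patches easy to submit upstream.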

In a surprising number of these cases, upstream developers can become super hostile. And it's kinda-sorta understandable.

When Firefox or RStudio breaks, users don't think to file a bug with their distro or reach out to the distro maintainers for support (they should!). Instead they go directly to the upstream developers, forcing the upstream devs to field all sorts of support requests for changes they didn't make, and to grow to hate the downstream distro maintainers.

The depressing thing is, we really really really need both of these roles -- upstream developers, and downstream maintainers. A separation of concerns between "does the latest version of this code work correctly" and "is this code reliably built/installed/configured in a target environment". Unfortunately we seem to be moving away from it by allowing upstream to shoddily craft and ship an entire rootfs in the form of a "container", and declaring it deployment-ready.


> Sometimes upstream software has tests that fail within Debian's build system, so Debian will patch the software to build/test under their build system.

This is the source of my question, as one of the worst Debian bugs ever[1] happened due to this policy.

> In a surprising number of these cases, upstream developers can become super hostile. And it's kinda-sorta understandable.

I believe that in the case I mentioned above, an upstream user stated after the fact that the entire developer base would have been super hostile to that patch. That hostility would have been quite valuable to the Debian community.

So I take it there is still no requirement for Debian "patchers" to contact upstream prior to patching?

[1] https://www.schneier.com/blog/archives/2008/05/random_number...


Is containerization common outside of things like services? To me it makes a lot of sense e.g. "I want to run Jenkins, just spin up a container" because it reduces the number of configurations that the developers have to consider. But it seems like the majority of packages will be individual libraries and tools.


Containers are frequently being used to bundle up an artifact for end users -- this means they're always going to be a service/application, as opposed to libraries (which get crammed inside the container). Those libraries still get developed, and still get installed inside the container even if they aren't the "direct" product. Only now, as a sysadmin, I can't easily audit the versions of those packages inside the container, because I have no guarantees on how they were installed. One of the 18 Dockerfiles between `scratch` and `some_application` might install a few bits of software from the distro packages of whatever rootfs they decided to base off of. The next layer may download a tarball and extract it over the rootfs to throw down a few more of the app's dependencies.
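A hypothetical Dockerfile illustrating that mixed provenance -- everything here (the base image choice, URL, and application name) is invented for the example:

```dockerfile
# Hypothetical example: two installation mechanisms in one image, only the
# first of which leaves a record in the package database.
FROM debian:stretch

# Layer 1: dependencies from distro packages -- auditable via dpkg
RUN apt-get update && \
    apt-get install -y curl libssl1.1 && \
    rm -rf /var/lib/apt/lists/*

# Layer 2: a tarball extracted straight over the rootfs -- invisible to dpkg
RUN curl -L https://example.com/some_application.tar.gz | tar -xz -C /usr/local

CMD ["/usr/local/bin/some_application"]
```

Anything installed by the second layer never shows up in `dpkg -l`, which is exactly the auditing gap described above.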

When using software packaged by a distribution like Debian, I'm given pretty strong guarantees that I can always audit the version and integrity of installed software, and that when a bug is fixed in a common library (*cough* openssl *cough*), I don't have to spend as much time fretting about the places a vendored vulnerable version might be hiding in my infrastructure.
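Those auditing guarantees come straight from the package database. A few illustrative queries, using openssl as the example package (the library path below is just a typical location, not guaranteed on every system):

```shell
# Which version of a library is installed?
dpkg -l 'libssl*'              # installed packages and their versions

# Where would an upgrade come from, and is one available?
apt-cache policy openssl       # candidate version and its origin repository

# Which package owns a given file on disk? (example path)
dpkg -S /usr/lib/x86_64-linux-gnu/libssl.so.1.1

# Verify installed files against packaged checksums (needs the debsums package)
debsums openssl
```

None of these answers exist for software that was untarred over a container's rootfs, which is the point being made above.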

That said, I don't mean to disparage use of containers in general: I'm a huge fan of them, I just feel pretty strongly they should be a software deployment tool as opposed to a distribution tool. I get a bit uncomfortable when I can't consistently generate an image "FROM scratch".


Thanks. How does that play out in the case of RStudio?



