"Why Linux Sucks" and "Why Linux Does Not Suck" videos from Linux Fest (lunduke.com)
85 points by BryanLunduke on April 29, 2012 | hide | past | favorite | 27 comments



Keep in mind that the points in the second video are the same as those in the first, just seen from a different perspective.

I think this mostly works OK except for one area: package format. The disparate package management in Linux is indefensible because it weakens the entire development ecosystem. Because the formats differ, the tooling for package management is far inferior to what it could be. The Linux package managers could be the single greatest tool for developers, but instead we get increasing fracture across the entire ecosystem.

One of the biggest problems is the inability for the package managers to integrate with language-specific packages. Gem/pip/cabal cause an extremely obnoxious fracture because they all have dependencies which can't be tracked without an OS package manager but at the same time the OS package manager can't integrate with them. If Linux could decide on a single package format and packaging tool then we could all start working on tooling to support integration of the different language packages. However, as long as there are still 3-4 major package formats and packaging tools that's not going to happen.

Added to that is the package managers' reliance on specific OS constructs. Homebrew works, but we really shouldn't have to use different package managers just because we're on a different operating system. Either the package supports that OS or it doesn't, but this shouldn't say anything about the package manager.

Aside from all of this is the difficulty in actually constructing the packages. Arch PKGBUILDs seem the closest I've used to an easy source->binary package, but they're not mainstream. As far as I know, debs still don't support natively bundling the source with the binaries, and the RPM SPEC system is the single unfriendliest system I've encountered for quickly making a portable package. Moreover, the ties to shell script just continue to harm OS-independent adoption.
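For readers who haven't seen one, here's a minimal sketch of the PKGBUILD format — the field and function names are the real ones, but the package, version, and URL are made up for illustration:

```shell
# Minimal PKGBUILD sketch (Arch format). "hello" and its URL are
# hypothetical; the fields and functions are the real interface.
pkgname=hello
pkgver=1.0
pkgrel=1
arch=('x86_64')
depends=('glibc')
source=("https://example.com/hello-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  # runs in the unpacked source tree
  cd "$pkgname-$pkgver"
  make
}

package() {
  # installs into $pkgdir, which makepkg then archives into a binary package
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running `makepkg` against this downloads the source, builds it, and emits an installable binary package — which is about as short as the source->binary path gets.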

I'm not sure what the answer is, but I'm starting to think the only acceptable solution is a complete severing of Linux distributions from package management.


Speaking as a user rather than a dev, even the current package management system is better than what we have on Windows. On Linux, my libraries are six months out of date because that's how long it takes the community to add them to the repositories. On Windows, my libraries are anywhere from two to six years out of date because that's the version that's distributed with each program, and my programs are anywhere from six to twenty months out of date because that's how often I reinstall my OS and go looking for new installers. I'll take Linux for now. It could be better, but it also could be worse.


As a user, the best part is that each program is self-contained.

There are a few issues with code duplication, but at least it never breaks.


If there's ever a security bug in a library, you're left vulnerable until the patch is issued, every individual maintainer patches their binaries, and you get around to installing them. With dynamic linking, you just have to patch the one lib package and you're done.
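To make the contrast concrete, here's an illustrative session assuming a Debian-style system (the commands are real apt tooling; the scenario is hypothetical):

```shell
# Illustrative, assuming a Debian-style system. Dozens of packages all
# link against the single shared zlib package:
#   apt-cache rdepends zlib1g | head
# ...so one upgrade of that one package closes the hole for every
# dynamically linked consumer the next time it runs:
#   sudo apt-get install --only-upgrade zlib1g
```

With static bundling, each of those reverse dependencies would instead need its own rebuild and its own release before you were safe.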


Or you end up breaking all installed applications, as is commonly the case.


The other nice thing about packaging systems is that you can choose how fast you want things to be updated. If you're trying to install Gentoo on a toaster oven, things will break; if you install Debian stable, you'll never have to worry about an update again.


On a somewhat related note: Why do 99% of the things that are packaged by Ubuntu (or any distribution) need to be packaged at all, let alone by the distribution?

A lot of the lack of focus (and fragmentation) in Linux comes from people duplicating each other's work. There is absolutely no need to have hundreds - thousands? - of developers spending their valuable time packaging up software when the app developers could do it just as well - or better, on account of knowing the software that they are packaging. People who work or volunteer for the OS developer should instead be writing new features for (and generally improving) the user experience of their respective distributions, not taking other people's software and making it work like it should already.

Two words: app bundle. Developers handle their own packaging. Bundles include dependencies. The end.


> There is absolutely no need to have hundreds - thousands? - of developers spending their valuable time packaging up software when the app developers could do it just as well - or better, on account of knowing the software that they are packaging.

1. Because that puts extra work on the developers, who don't necessarily know the target system very well. What if the developer doesn't know the FHS (which apparently is quite common, given what I've seen...)? How are you going to manage dependencies if the packaging systems differ widely? (And no, there's no one way to bridge them all, because some of them differ because of the whole construction of the distribution itself!)

2. They'll never do it.

(#1 leads to #2, but seriously, it's enough work getting developers to write a proper Makefile/gemfile/setup.py or whatever is appropriate for the language they're working in. Now you want them to do it 100 times over, for systems that they've probably never used before?)

That works for something like sta.li, in which everything is statically linked, because then you have no problem. But when you start talking about dependency tracking and small-but-important configuration differences between distributions, this becomes impossible.

The solution is to use a two-tiered approach, which some distributions sort of already do. If I make a Python program, for example, I create a setup.py file and all of the other things that you need to install it with pip (sidenote: this is embarrassingly complicated for a language that's otherwise dead simple!). Then I let each distro's community handle it themselves - after all, they know their system better than I do, and remember that I've already listed all dependencies, etc., because I've packaged the program itself properly!

Then the distribution can figure out how to handle those packages - perhaps installing them directly through pip is the best solution, or perhaps they add an extra layer over the language-specific package (check out the AUR for something similar to this), or perhaps they want to just hard-code the file destinations. Whatever makes the most sense in the context of the distribution itself.


And who packages glibc? Or udev? Or zlib (which is so quirky that it almost doesn't build at all from the upstream source)? That's where the packaging bandwidth is spent in the distributions: middleware. Certainly not "apps", which are generally trivial to build as long as you have the dependencies correct. Take a look at, for example, the Fedora spec files for the software you use vs. the middleware and see the difference in complexity.

And note that Android's (which I assume is what you're referring to when you talk about "app bundles") middleware packaging is no less complicated (honestly it's a lot more so in a lot of ways -- no firm dependency tracking, everything must build all at once). Check the AOSP "external" tree.


I'd expect that, as on other OSes, core dependencies would still be managed by the distributor. But these are nothing but the barest essentials for making the OS work - I count udev, glibc, and zlib among them. Mac OS X does this. But these are very few in number compared to the total number of packages stored in, for example, Ubuntu's repositories.

If the price of having apps that just download and work on any distribution is having a more complicated middleware system, then I'll take it.


With bundles I don't know if the software has been tested on my architecture, if it's stable, if I'm downloading a botnet, if it comes with bundled libraries or if it works properly with -fpie and --as-needed.

Then you have software that doesn't have official releases (like half the stuff on GitHub).

I'll stop using a package manager the day that all ruby gems work with MRI 1.8, 1.9, JRuby and REE. Basically, never.


I think the entire idea of "packages" is fundamentally broken on some level. The problem is that defining yourself in terms of your dependencies is a very fragile position to be in.

And it's basically not necessary -- you can just include your dependencies with your program. Statically link it and make a fat binary. The whole notion of packages seems like it's optimizing for hard drive space and bandwidth, which is rather silly when you consider that there's an abundance of both.

I'd rather software just come as self-contained as possible and avoid the entire packaging system altogether. If there's duplication and a waste of resources so be it.


> Added to that are the package managers reliance on specific OS constructs. Homebrew works but we really shouldn't have to use different package managers just because we're on a different operating system. Either the package supports that OS or it doesn't, but this shouldn't say anything about the package manager.

I don't really follow your logic. There are very good reasons why you wouldn't be able to install the same package in the same way on Linux and OS X. (For starters, homebrew isn't a true package manager in that it's not a one-stop shop - it can't even handle critical system updates like apt-get/pacman can!)

> Aside from all of this is the difficulty in actually constructing the packages. Arch PKGBUILDs seem the closest I've used to an easy source->binary package but they're not mainstream. As far as I know debs still don't support natively bundling the source with the binaries and the RPM SPEC system is the single unfriendliest system I've encountered to quickly make a portable package.

I think PKGBUILDs are the best of both worlds, because they allow system-specific package management to coexist with application-level packaging - two related, but subtly different problems. PKGBUILDs are general-purpose enough that they can do pretty much whatever you need them to do, assuming that they're well-written (which is our assumption when comparing any packaging method), and they're flexible enough that you can modify them to fit your specific system.

The only real problem I see with PKGBUILDs is that, at least on Arch, they're community-developed, which means that they can be out-of-date or variable in quality. But that's no worse than the situation if you were to install from source, anyway (which is the alternative), and that's more a byproduct of the way that the Arch community is structured than the PKGBUILD system itself. I'm sure if Red Hat or Canonical suddenly decided overnight that they'd start supporting PKGBUILDs, things would be different (and would have their own problems, likely...)

> Moreover, the ties to shell script just continue to harm OS independent adoption

Without looking, I'm 99% sure that PKGBUILDs are bash, not sh, but I'm also 99% sure that 99% of the PKGBUILDs I've ever installed are sh-compatible. And I don't think that's the worst thing in the world - in this day and age, you have to be able to handle sh for any Unix-like OS, and you have to be ready to implement the appropriate interface for any other OS that you want to be compatible (or to throw compatibility to the wind altogether, like Windows does).

Second, since many PKGBUILDs are really just convenient wrappers around the already-provided application distribution mechanism, I don't see how this is that bad at all.


> I don't really follow your logic. There are very good reasons why you wouldn't be able to install the same package in the same way on Linux and OS X. (For starters, homebrew isn't a true package manager in that it's not a one-stop shop - it can't even handle critical system updates like apt-get/pacman can!)

I'm arguing that we should have an OS switch in the same way we have an architecture switch. If we assume that our PM can handle application packages as well as system packages, then we should also be able to use it to install application packages on any OS which supports it. For example, python should be installable on everything from Windows to BSD all using the same "pm install python" command. Packages would be marked as explicitly supporting particular OSes.
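A sketch of the proposed UX — "pm" is hypothetical, not a real tool, shown here only to illustrate the OS-switch idea:

```shell
# Hypothetical "pm" tool; no such program exists. The same invocation
# would work on Windows, Linux, or BSD:
#   pm install python    # resolves the package for the running OS
#   pm info python       # lists declared OS support, analogous to how
#                        # PKGBUILDs declare supported architectures today
```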

> I think PKGBUILDs are the best of both worlds, because they allow system-specific package management to coexist with application-level packaging - two related, but subtly different problems. PKGBUILDs are general-purpose enough that they can do pretty much whatever you need them to do, assuming that they're well-written (which is our assumption when comparing any packaging method), and they're flexible enough that you can modify them to fit your specific system. The only real problem I see with PKGBUILDs is that, at least on Arch, they're community-developed, which means that they can be out-of-date or variable in quality. But that's no worse than the situation if you were to install from source, anyway (which is the alternative), and that's more a byproduct of the way that the Arch community is structured than the PKGBUILD system itself. I'm sure if Red Hat or Canonical suddenly decided overnight that they'd start supporting PKGBUILDs, things would be different (and would have their own problems, likely...)

Agreed.

> Without looking, I'm 99% sure that PKGBUILDs are bash, not sh, but I'm also 99% sure that 99% of the PKGBUILDs I've ever installed are sh-compatible. And I don't think that's the worst thing in the world - in this day and age, you have to be able to handle sh for any Unix-like OS, and you have to be ready to implement the appropriate interface for any other OS that you want to be compatible (or to throw compatibility to the wind altogether, like Windows does). Second, since many PKGBUILDs are really just convenient wrappers around the already-provided application distribution mechanism, I don't see how this is that bad at all.

This was my main complaint with PKGBUILDs. A bash/sh dependency hurts on non-Unix OSes. I'd rather Python than sh.

I also admit that, for the most part, I absolutely hate sh and all its derivatives and view them all as dirty hacks, so this is probably just innate bias.


> For example, python should be installable on everything from Windows to BSD all using the same "pm install python" command. Packages will be marked as explicitly supporting OSes.

I don't see how that would work. In an abstract sense, I can conceptualize it, but from a practical standpoint, I can't see it ever being implemented well enough to the point where it would be widely adopted (which would be the whole point). If you can think of a way to do it, by all means go for it, but I just don't think it's possible.

> A bash/sh dependency hurts on non-Unix OSes. I'd rather Python than sh. I also admit that, for the most part, I absolutely hate sh and all its derivates and view them all as dirty hacks, so this is probably just innate bias.

I think it's just your bias. At some point, whatever language it's written in would be non-native on most of the OSes, so you'd have to port something one way or the other (Python has a fairly nice port that abstracts away some of the warts, but it's still just serving as a pseudo-agnostic interface). Given that, why not sh? It's already been ported to Windows (I've never used Cygwin, but it seems like it'd be a useful starting point), and virtually every other OS is either Unix-like or supports some native POSIX-like interface for compatibility reasons.

If you're complaining about sh syntax, I'm a bit sympathetic toward that, but it's ingrained enough - and good enough - that I wouldn't go to the effort of replacing it. Also remember that, for shell scripting, you don't want a language as powerful as Python - it needs to be able to do a very small set of simple tasks and do them well.

Also, just think about concurrency. We take pipes for granted, but imagine trying to do that all natively in Python.
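As a trivial example of what the shell gives you for free: three small programs composed with pipes, each stage running as its own concurrent process, in one line (summing the numbers 1 through 5 here):

```shell
# Each stage is a separate process; the shell wires them together
# with pipes and runs them concurrently.
seq 1 5 | awk '{s += $1} END {print s}'   # prints 15
```

Reproducing that composition natively in Python means subprocess management, explicit file descriptors, and buffering decisions that the shell handles implicitly.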


Kickstarter sounds like a viable option for bringing more software to Linux if only for the reason that it's attracting a lot of attention lately.

There is a danger of this making Linux a second-priority OS, but it pretty much is already, so I don't know if this is a huge issue. A team behind a proven piece of software for Windows or OS X could say, "Hey look, we've made this great tool for Windows/OS X users and made a lot of money on it. We'll bring it to Linux if we raise X amount of dollars." The Linux users are significant enough and generous enough (as proven by the Humble Bundles) that it would work financially and would make the Linux ecosystem explode with new variety.

Open source developers on the other hand might find it easier to raise funds and continue to hack away on their projects with a similar solution. The Ardour developer could go on an open-sourced version of Kickstarter and say "Hey guys I've developed this awesome audio software that a lot of people use. Here are the features I'd like to bring to the next version.... I need to raise 100K in order to be able to continue working on this."


This is why I'm excited about Light Table, even though I don't think I'll ever use Clojure or JavaScript much. The author is also experimenting with a kind of humble-bundle licensing.


In Europe we don't even know what Kickstarter is all about, unless we google for it...

I just found out about it by reading American blogs.


The guy who gave this talk has a really fun weekly show that I've been addicted to as of late. The Linux Action Show, most recent one here: http://www.jupiterbroadcasting.com/18887/ubuntu-12-04-review...

They do a live show on Sunday and put the recording out the next day. Today's will be on in about 40 minutes from now (1:00P EST)


I'm so glad that I watched the first of these, and I think I will also enjoy the second. Since it required a bit of searching due to the very short mention at the end, the "Vivaldi" KDE tablet can be found at http://makeplaylive.com/ , which cites a "target retail price of €200" for the tablet stack.

Related to this, I have been helping seed the torrents and distribution of a new flavor of Kubuntu called Kubuntu Active, which can be found at http://cdimage.ubuntu.com/kubuntu-active/releases/precise/re... . Like the Vivaldi tablet mentioned, it's based on the KDE Active stack and is thus optimized for touchscreens. I'm going to try this live CD out on my convertible tablet PC, once I can find a CD to burn it to. I was already very surprised to find that six months ago my upgrades magically enabled multitouch, but at the time it seemed like the only multitouch gesture anyone had programmed for was pinch-to-zoom in KDE.

Since it might also be relevant, in the vein of "send money to support businesses which ship Linux," shortly after I bought my current Fujitsu laptop I found a nice company selling Linux laptops called System 76: https://www.system76.com/ . While they don't seem to be selling convertible touchscreens yet, they do make some very pretty and cost-effective laptop options. (Full disclosure: I am plugging them in part because I want them to stay in business, because I want to buy from them whenever I need my next laptop.)


I came expecting the latter to be a response to the first by another person. Nope. It's the same guy taking the opposite perspective on every point in the first.


Someone needs to send Chris a damn tripod!


Can we agree on a standard Linux directory structure first?


How hard is it to hold a camera steady?


Pretty hard without a tripod. I don't know why Chris didn't bring one.


God bless ya, bryan :p


That video gave me motion sickness, and I couldn't see the slides very well.



