Canonical bringing Snappy Ubuntu to phone and desktop (rainveiltech.com)
117 points by Beacon11 on Nov 4, 2015 | 44 comments



We have evaluated Snappy a lot. In Snappy you can only share executables as dependencies, not libraries. So you can use curl as a program, but if you want to use libcurl in your application, you have to include the library in your package. When that list grows large, you have to keep track of the state of every dependency you include: bugfixes, security patches, and so on. Regular package managers, by contrast, also let you depend on libraries, and that removes a lot of the headache, since those packages are shared globally and updated by the system. I have almost never seen a backwards incompatibility caused by an update (one program needs a newer liba, but another program cannot use that newer version); it happens very rarely.

If you insist on using a specific library version, there is nothing stopping you from including it in your package, as Snappy does.

Lastly, when you include libc a million times, or statically compile your binaries, the size goes up and storage/bandwidth also becomes an issue.

I agree that Snappy solves some problems, but introduces new serious ones as well.


The only problem Snappy creates is one of (initial) size. When every package is a .snap, that means you're going to have many duplicated dependencies all over the place. It also means that any security update to a common dependency will require updating a zillion Snappy packages all at once.

Fortunately, the creators of Snappy thought of that and included an atomic/delta update system. So when you have 100 packages that need to be updated because of a common dependency you only need to download 100 binary diffs instead of 100 full-size snap packages.
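
To get a feel for how much a delta saves, here's a minimal sketch using the third-party bsdiff4 Python package (purely an illustration with made-up file names; it says nothing about Snappy's actual delta format):

    # Sketch: how much smaller a binary delta is than a full download.
    # Assumes the third-party bsdiff4 package (pip install bsdiff4) and two
    # hypothetical builds of the same app; illustrates the general idea only.
    import bsdiff4

    with open("app-1.0.bin", "rb") as f:   # hypothetical old build
        old = f.read()
    with open("app-1.1.bin", "rb") as f:   # hypothetical new build
        new = f.read()

    patch = bsdiff4.diff(old, new)          # binary delta between the two builds
    print(f"full download: {len(new)} bytes, delta: {len(patch)} bytes")

    # Applying the delta on the client side reproduces the new build exactly.
    assert bsdiff4.patch(old, patch) == new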

Unfortunately packaging appears to be a paradox of sorts: Either your app lives in an environment of shared libraries where you have no control or your app lives inside a container of some sort that bundles copies of all those libraries. You can't have both.


> Fortunately, the creators of Snappy thought of that and included an atomic/delta update system. So when you have 100 packages that need to be updated because of a common dependency you only need to download 100 binary diffs instead of 100 full-size snap packages.

That's not really a solution though, just a mitigation.

Updating a single package for security fixes or other changes, instead of 1,000, is still vastly more efficient and faster.

> Unfortunately packaging appears to be a paradox of sorts: Either your app lives in an environment of shared libraries where you have no control or your app lives inside a container of some sort that bundles copies of all those libraries. You can't have both.

That's because the solution can't come from packaging alone; it requires changes at every level of the stack.

The system needs to provide robust backwards-compatibility and sane packaging.

Applications need to use interfaces responsibly, pick dependencies carefully, and use a sane build environment.

When developers attempt to solve everything with the blunt hammer of packaging, they end up with a lot of "bent nails".


> Updating a single package for security fixes or other changes, instead of 1,000, is still vastly more efficient and faster

It would be interesting to see the statistics - how many packages on average really use the same libraries.


Almost everything uses libc, so that's an easy one. But after that, libraries tend to be domain specific.

For example, all C++ programs will probably be linked against libstdc++. Programs compiled with gcc have a good chance of being linked against libgcc_s.

All desktop applications are probably going to be linked against libgtk or libqt, libx11, and various other X libraries. In short, there's a significant, obvious benefit.
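
If you want rough numbers for your own machine, here's a quick sketch (just an illustration: it assumes a glibc-based system and naively parses ldd output for everything in /usr/bin, so don't treat it as real statistics):

    # Rough tally of how many /usr/bin binaries link against each shared library.
    # Assumes a glibc-based system where `ldd` prints lines like
    # "libfoo.so.1 => /lib/... (0x...)"; purely illustrative.
    import collections
    import os
    import subprocess

    counts = collections.Counter()
    for name in os.listdir("/usr/bin"):
        path = os.path.join("/usr/bin", name)
        try:
            out = subprocess.run(["ldd", path], capture_output=True, text=True, timeout=5)
        except (OSError, subprocess.TimeoutExpired):
            continue
        for line in out.stdout.splitlines():
            lib = line.strip().split()[0] if line.strip() else ""
            if lib.startswith("lib"):
                counts[lib] += 1

    for lib, n in counts.most_common(10):
        print(f"{n:5d}  {lib}")

You'd expect to see exactly the pattern described above: libc at the top, then libstdc++/libgcc_s, then the GUI toolkits on desktop systems.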

As I mentioned somewhere else though, if a library is only used by a single application and that library and application are built together, then yes, you might as well statically-link it unless there's some other requirement.


> When that list grows large, you have to keep track of the state of every dependency you include: bugfixes, security patches, and so on. Regular package managers, by contrast, also let you depend on libraries, and that removes a lot of the headache, since those packages are shared globally and updated by the system.

And this is the back-and-forth problem that nobody has the answer to. Do we let this one app break because its maintainer isn't keeping up with library updates? Or do we accept system-wide insecurity because we need that one app to work?

I'm in favor of the former, but I think both sides have good arguments.


Having been burned repeatedly by the primary problem you mention, I lean more towards the second case simply because security depends on the application involved and how it's used.

An insecure app may have mitigations available that don't reduce its usefulness. A broken app is just broken and useless.


Slightly off topic, but I'm really frustrated with the state of software packaging these days. Every solution tries to solve the same problems (reliable dependency resolution, reliable installation and removal), but they all try to solve them in different, incompatible, and flawed ways.

Consider the following: I need a reliable way to deploy my app to clients.

The popular options are:

1) Source code + configuration system (automake, for example)

2) Binaries built for popular platforms (debs, rpms, exes)

3) Docker image

Source code means that the client needs the whole build toolchain, which might be quite computationally expensive (especially on mobile)

Packages have to be built and maintained, and don't fully solve the dependency issues (i.e. I might expect Ubuntu 14.04 to have a specific version of libc, but the user might have upgraded it, and I can't install my own incompatible version)

And of course Docker: ship my clients an operating system and require them to have, and know how to configure, the runtime so they can run my 1 MB compiled binary. It also doesn't work on embedded devices.

Ideally, in the future most distributions will move to functional package managers and, at least for mobile, will have binaries available for every possible version of every dependency, but at the moment that's just a pipe dream, and things like Snappy don't get us any closer to it.


> Packages [...] don't fully solve the dependency issues (i.e. I might expect Ubuntu 14.04 to have a specific version of libc, but the user might have upgraded it, and I can't install my own incompatible version)

I feel that once a user circumvents the package system, it is unreasonable for that user to expect that the package system will continue to work correctly. In that case, the user has taken over some of the responsibilities of the package system, and software vendors should feel no obligation to ensure compatibility with such changes.


I don't disagree, and of course swapping out libc might be an extreme example, but my point still stands - ideally I shouldn't have to rely on any specific dependencies being in place to deploy my software.


A certain number of dependencies is a reasonable expectation; for example, requiring a minimum version of libc is entirely reasonable, as is requiring a minimum version of the kernel.

However, if newer versions of libc or the kernel are not backwards-compatible, that's a failure in the system itself (not the packaging). (Linus, as an example, has a strict rule about not breaking userland applications due to kernel changes.)

It's impractical (and undesirable, from a security standpoint among others) to statically link all dependencies into every binary simply to "simplify" packaging. Not only would that waste storage space and memory, it would also prevent fixes (security or otherwise) applied to a library from taking effect without patching every application that statically linked it.

Even if you hand-wave away the storage space issue, larger binaries mean greater I/O and network bandwidth requirements, which means it takes longer to update systems or create images for deployment. That in turn leads to increased downtime or increased sustaining costs, which effectively reduces the availability of the system.
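
As a back-of-the-envelope illustration (all numbers made up, just to make the scaling concrete):

    # Hypothetical cost of bundling a shared library into every package.
    lib_size_mb = 2          # size of the shared library
    consumers = 100          # packages that each bundle their own copy

    duplicated_storage_mb = lib_size_mb * consumers   # 200 MB sitting on disk
    shared_update_mb = lib_size_mb                    # fix one shared copy: 2 MB
    bundled_update_mb = lib_size_mb * consumers       # re-ship every bundle: 200 MB

    print(f"storage used by duplicates: {duplicated_storage_mb} MB")
    print(f"update download, shared: {shared_update_mb} MB vs bundled: {bundled_update_mb} MB")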

This can be mitigated somewhat by using delta-only update mechanisms, but that only partially mitigates the issues created by statically-linked binaries; it does not eliminate them.

Dynamically-linking dependencies also ensures that future performance and security updates don't require a rebuild of existing applications; you can simply update the dependency, and if done properly, every application linked to it will automatically receive the benefits.
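
A tiny illustration of that point (it uses Python's ctypes to load glibc at run time, and assumes a Linux system where the library is named "libc.so.6"): the script never needs to be rebuilt; it simply picks up whatever libc version is installed when it runs.

    # Dynamic linking in miniature: the library is resolved at run time, so
    # updating libc on the system changes what this picks up, with no rebuild.
    # Assumes a Linux system with glibc (library name "libc.so.6").
    import ctypes

    libc = ctypes.CDLL("libc.so.6")
    libc.gnu_get_libc_version.restype = ctypes.c_char_p
    print("running against glibc", libc.gnu_get_libc_version().decode())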

Obviously, if a given dependency is only ever used by a single component, and the component is always rebuilt when the dependencies are, it doesn't matter if they're statically-linked. I'm referring only to the system dependencies common to many system components (such as libc).

In the end, a carefully chosen set of dependencies that provide robust backwards-compatibility provides the least downtime and the most compelling administrative experience.


I agree that functional package management is the way forward. Having a system that contains the precise details of how to go from source code to binary, in a completely automated fashion, for the entire set of packages all the way down to the bootstrap binaries is a critical foundation to build upon.


Seems to summarize down to a love/hate relationship with shared libraries. Most of the flaws are best explained as the devs, admins, users, security team, and distro maintainers sometimes having different positions on the spectrum of love/hate for shared libraries.

Also don't forget architecture... do you want that binary in 32-bit i386, 64-bit amd64, or ARM?


In a nutshell, yes. The binary argument is true as well; however, many developers don't want to make their sources available, which means that, other than going source -> some obfuscated IR -> native, there is no way to allow the end user to run your app on any arbitrary platform.


There is. It's called fat binaries. Ryan C. Gordon was working on an ELF extension for it, but ended up quitting due to getting shit on by kernel hackers and common users alike.


I feel your pain. I'm trying to figure out how to get some code with some unique dependencies shipping to a wide range of users (both in skills and platform), and I don't even know of a good (multiplatform) solution that doesn't look like "bake it all into a static zip and unzip into /opt" or "run a docker container".

I need to think about that a bit more instead of solving the actual product problem at hand. :-/


There are solutions to this problem (e.g. ZeroInstall [1]), but the distributions don't support them, since they want to keep you locked in to their vendor. Guess why Ubuntu is developing Snappy rather than using xdg-app [2]? Purely functional package managers also exist [3]; distributions just ignore them. It's hard to use anything not based on deb or rpm these days.

[1] http://0install.net [2] https://wiki.gnome.org/Projects/SandboxedApps [3] https://nixos.org/nix/


I find it hilarious that you imply Snappy is some deliberate NIH of xdg-app, when xdg-app itself is GNOME/Red Hat's NIH of Nix.

If anything, we should be lauding Canonical for keeping to themselves in their corner instead of seeking greater validity across the ecosystem with half-baked ideas.


I think Docker is meant for dev ops, not for casual users.


It's supposed to be for devops, but in practice developers use Docker as a way to solve dependency management on the server more than to separate instances of apps from each other. Especially with "microservices", it makes more sense to have Docker available on the server and let each team upload an image with all the deps already baked in.


Yup. The tradeoff is that in operations we then need to work around all of the inherent disadvantages of running Docker containers (network slowness, disk slowdown, logging, etc.) to keep those systems performant.


You will be fine if the user installs a newer libc. Binaries and .debs linked against libc from 10 years ago work and install fine.


You mean sort of like npm?


npm is an example of a bad solution to dependency management. So if I want you to run my 6 KB node.js script, you need to:

1) Build/obtain node for your platform (and now you've hit the exact problem I'm talking about)

2) Install npm (which I can't really define as a dep; you're just expected to have it)

3) Install all the deps I've defined and hope they all work on the version/configuration of node you have (execjs only works on node 0.10 but you have v4? Too bad! You didn't apply the increased-memory patch before building node? Too bad!)

The ideal here, to reiterate: given nothing but your installed OS, I should be able to give you a file that takes you from nothing to a running app, and from the running app back to a clean system if you decide to remove it, without you needing any prerequisites (no node, no chef, no docker, etc.)


I don't think you can really complain about having to have node.js installed in order to run a node.js script.

If you want self-contained binaries, learn Go. If you want lightweight scripts that run effortlessly on all unix systems, learn bash scripting. It sort of sounds to me like you're complaining about not being able to shoehorn your chosen language into all possible scenarios.


I think that's a very reasonable thing to complain about in the context of dependency management. The runtime (node.js) and its compile-time and run-time options are actually a dependency of your application.

As an end user, I shouldn't be expected to know how to configure dependency x for my system to run app y, and as a software developer I shouldn't have to build/package apps to match the client's system configuration.

The sticking point is (back to my point) that the vast majority of distributions don't give the app a way to manage its deps and pull in the specific versions it requires; the best you can do is list x as a dep in your package and hope that it won't break your app or the system in some way.


Agreed. Have two dependencies which share the same underlying dependency? Install and compile that dependency twice (with two different versions, by the way)[1]. Want to use grunt? You need to set it up as a dependency for your project, but only for development. A problem while installing? Have fun with the debug output detailing the fetching and building of every dependency, and every dependency's dependency, and every dependency's dependency's dependency... etc.

[1] I can't believe how many distinct copies and versions of the 'shell' module are downloaded and compiled when pulling in a seemingly simple set of modules.
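
If you're curious how bad it is in a given project, here's a quick sketch (it just walks node_modules and reads the standard name/version fields from each package.json; scoped packages are ignored for simplicity):

    # Count how many distinct copies/versions of each package live in node_modules.
    import collections
    import json
    import os

    versions = collections.defaultdict(set)   # package name -> versions seen
    copies = collections.Counter()             # package name -> copies on disk

    for root, _dirs, files in os.walk("node_modules"):
        # Only a package's own manifest, i.e. .../node_modules/<pkg>/package.json
        if "package.json" in files and os.path.basename(os.path.dirname(root)) == "node_modules":
            try:
                with open(os.path.join(root, "package.json")) as f:
                    meta = json.load(f)
            except (OSError, ValueError):
                continue
            name, version = meta.get("name"), meta.get("version")
            if name and version:
                copies[name] += 1
                versions[name].add(version)

    dupes = [(n, copies[n], sorted(versions[n])) for n in copies if copies[n] > 1]
    for name, n, vers in sorted(dupes, key=lambda t: -t[1])[:10]:
        print(f"{name}: {n} copies, versions {vers}")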


I wouldn't use npm as an example of a stellar package manager.


I'm not using it as an example of a "stellar" package manager, just one where dependencies can be defined with a version. I was under the impression that apt-get will just use the most recent version.


apt-get/dpkg do have the ability to specify package versions, including specifying versions equal to, greater than, or less than. It also has the ability to specify build time and run time dependencies separately.
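
To make that concrete, a Depends field can say something like "libfoo (>= 1.2)", and the comparison rules behind those operators are exposed through the python-apt bindings (a small sketch assuming python-apt is installed; it only illustrates the version operators, not how dpkg resolves them):

    # Debian version comparison, as used by versioned Depends like "libfoo (>= 1.2)".
    # Assumes the python-apt bindings are installed.
    import apt_pkg

    apt_pkg.init()

    # Negative, zero, or positive, like strcmp, following Debian version rules.
    print(apt_pkg.version_compare("1.2.3-1ubuntu1", "1.2.10-1"))

    # check_dep answers "does version X satisfy the constraint (op, Y)?"
    print(apt_pkg.check_dep("1.2.3", ">=", "1.2"))   # True
    print(apt_pkg.check_dep("1.2.3", "<<", "1.2"))   # False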


You can specify a version. The real trouble is that you generally cannot have multiple versions of the same package installed on the same system. Thus, if multiple packages specify different versions as dependencies, you're hooped.


So that is something npm does, AFAIK. Isn't it even possible to have multiple different versions of a sub package in the dependency tree of one project?


The downvotes are undeserved. The solution is npm on the OS level - you specify the version of the dependency you require, and each version is installed across the OS only once.


I wish they'd just focus on making Ubuntu work. I mean, look at one of the flagship Ubuntu laptops, by Dell:

http://en.community.dell.com/techcenter/os-applications/f/46...

I'll still buy the things, because it's important to me to put my money where my mouth is, but it looks like it could use some significant investment to make it a better product.


Yeah, Ubuntu has always spread itself too thin, while neglecting bugfixing on its core products.


I had a feeling this would happen. Containerisation as a trend has just as much to offer desktop OSes as it does for cloud clusters.


Is this really containerization as we know it, like Docker? To me, Android and the like just have good separation between processes.

This is, of course, the core of Linux containers. But I don't need to deploy a system image to run an Android application.


Snappy packages are definitely containers in the sense that they 'contain' an application along with all its (runtime library) dependencies. Think of containers in order of size/scope like this:

1. Physical appliances.

2. Virtual machine images.

3. Docker images.

4. Snappy packages (and similar)

At the top we have an actual physical piece of vendor-chosen hardware that "contains" a vendor-controlled application and execution environment. The end user has almost no control.

Then come virtual machine images, which "contain" a vendor-controlled application inside a vendor-chosen operating system.

Up next we have Docker images which "contain" a vendor-chosen execution environment and application but run inside the end user's chosen operating system (in a special sandbox).

Lastly we have Snappy packages which "contain" an application and its build/run-time dependencies but run just like a regular application, with user-controlled restrictions (via AppArmor).


As Docker still doesn't really provide security (by design), the main thing Docker brings with it is an awareness of dependencies. Now anyone can whip up a microservice that's ready to run in a chroot, with minimal access to the rest of the system. It looks like Snappy packages do much the same thing.

Try to hand-craft a chroot for Firefox, and see how far you get (at least without cheating and doing something like xhost+).

Even some daemons can be a bit of work to chroot - e.g. a working webserver/webapp server with PHP that needs to resize images (pictures).

If we can get more people to target docker/snappy (and we do) we get more applications and daemons that are easy(ish) to run in a chroot (or jail).
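
For a taste of the hand-crafting involved, here's a rough sketch of just the boring part (it assumes a glibc system with ldd and only handles shared libraries; the config files, /dev nodes, and X sockets are exactly the bits that make something like Firefox painful):

    # Copy a binary plus every shared library ldd reports into a chroot tree.
    # Assumes a glibc system with ldd; data dependencies are left out entirely.
    import os
    import shutil
    import subprocess

    def populate_chroot(binary, root="/tmp/mychroot"):
        out = subprocess.run(["ldd", binary], capture_output=True, text=True, check=True)
        needed = [binary]
        for line in out.stdout.splitlines():
            parts = line.split()
            # Lines look like "libfoo.so.1 => /usr/lib/libfoo.so.1 (0x...)"
            if "=>" in parts and len(parts) >= 3 and parts[2].startswith("/"):
                needed.append(parts[2])
            elif parts and parts[0].startswith("/"):   # the dynamic loader itself
                needed.append(parts[0])
        for path in needed:
            dest = os.path.join(root, path.lstrip("/"))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(path, dest)

    populate_chroot("/usr/bin/curl")   # example binary; then: sudo chroot /tmp/mychroot /usr/bin/curl ...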


Can you point to the security tradeoff/design decisions regarding Docker? I would really appreciate it.


First there was OpenVZ, then there was LXC - which essentially lets you run an isolated userspace instance on a shared kernel (very much like BSD jails). LXC can be locked down with capabilities, kernel namespaces, and cgroups, and allows for quite fine-grained isolation. But the focus of Docker is more on usability: the equivalent of asking a process to sit in the corner and stare at the wall. It's an easy way to get a lot of unruly children to stop interfering with each other, if you have many corners. But they're kind-of-sort-of still in the same room. Not chained to their desks, not blindfolded.

But generally, Docker doesn't do all that much locking down - if you have the libc/code to do something in a Docker container, you can in general do it. The layered fs is mostly like a chroot - it's safe as long as the kernel is safe, and completely unsafe if the kernel isn't. The flip side is that if you write a program/daemon that works on regular GNU/Linux, it'll work (as long as you provide the needed library code) in a Docker container. And Docker helps focus dependencies, in particular data dependencies (the other tricky part of getting something to run in a chroot - where's /etc/shadow? where's /etc/group? etc.).

I'm sure we'll see Docker move in a stricter direction as more people wrap their minds around isolation -- apart from running under the same kernel, LXC can do more. And we have things like rkt with the KVM backend, which takes all the hard work put into containers and magically(ish) transplants it to work with minimal VMs.

Docker did/does(?) one thing: it didn't allow you to run Docker inside Docker (you need/needed a "privileged" container for that).
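
To make the capability story concrete, a sketch with the Docker SDK for Python (docker-py; an assumption for illustration - the same knobs are --cap-drop and --privileged on the CLI): the extra lockdown is something you opt into yourself, while Docker-in-Docker style setups go the other way and ask for privileged access.

    # By default a container keeps a set of capabilities; dropping them all is
    # an explicit opt-in, and "privileged" is the opposite end of the spectrum.
    import docker

    client = docker.from_env()

    # A more locked-down container: drop every capability, read-only rootfs.
    print(client.containers.run(
        "alpine", "id",
        cap_drop=["ALL"],
        read_only=True,
        remove=True,
    ).decode())

    # The opposite extreme (needed for things like Docker-in-Docker):
    # privileged=True hands the container broad access to the host's devices.
    # client.containers.run("docker:dind", privileged=True, detach=True)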

I'm not sure if that was as coherent as what you hoped for, but that's sort of what I tried to imply wrt design tradeoff.


Has anyone evaluated this vs NixOS (http://nixos.org/) ?

They seem to have some overlap, and I've heard good things about NixOS; I like the theory behind it, but I find its Haskell-like configuration language hard to understand.



Thanks!



