Docker, the Linux container runtime: now open-source (docker.io)
403 points by shykes on March 26, 2013 | 76 comments



Hey everyone! We didn't plan on open-sourcing Docker for another few weeks, but last week [1] you guys made it clear you'd rather see it early, warts and all. So here it is! Pull requests are welcome!

[1] https://news.ycombinator.com/item?id=5408002


I feel this is against HN etiquette but I need to thank you. Be encouraged and go far. This looks awesome. I wish you many a decent pull-request.


It's never against the HN ethos to show someone you appreciate their work.


I think he was referring more to the fact that his comment lacked substance beyond what amounted to an "Awesome."

Honestly, I think it's fine. Sometimes an up arrow just isn't expressive enough.


No, I understood the context of his message, but allow me to articulate further: "While a simple post of agreement or gratitude may be against the norm around these parts, it doesn't harm the community; on the contrary, it lets the OP know that posts of this kind are good and encourages more of them in the future."

However, I agree, sometimes it's nice to have something positive to read in the midst of constructive criticism.


I've loved messing around with docker so far.

Do you guys have any plans to offer an image library? Pre-built versions of common open source tools, like official AMIs or the VMware virtual appliance directory?

I understand it doesn't exactly match your primary goal, but I think there is a large untapped demand for lightweight appliances for use at home or inside the firewall that Juju really isn't set up to satisfy. I love the similar feature on my Synology NAS, although those are really just packages. There is often a large gulf between knowing how to set something up and being willing to do it.


There has been a lot of demand for that, yes. I think it makes a lot of sense and would make docker much more useful out of the box.

You can already publish your own images with 'docker push'!


The more I look into FreeBSD, the more I realise how awesome it is. This same thing has been in FreeBSD for ages: jails.

(maybe not the fancy deployment, but the container technology)

That, together with ZFS, turns FreeBSD into one hell of a server OS. ZFS seriously needs to come to Linux. And don't say license problem: "there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code."[1]

[1] http://zfsonlinux.org/faq.html#WhatAboutTheLicensingIssue
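For the curious, a minimal sketch of the FreeBSD equivalent (the pool and jail names here are made up, and you'd still need to populate the dataset with a base system first):

    # Give the jail its own ZFS dataset (cheap snapshots and clones)
    zfs create tank/jails
    zfs create tank/jails/web

    # Start a jail rooted in it, using the FreeBSD 9+ parameter syntax
    jail -c name=web path=/tank/jails/web host.hostname=web \
        ip4.addr=10.0.0.2 command=/bin/sh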


I have been looking more at SmartOS (and other Illumos derivatives). Zones + KVM + ZFS. Oh my!


You forgot DTrace!


ack! I sure did!

And you are quite right. DTrace is super.


SmartOS does seem to be missing an X server you can run in the global zone on the machine's physical video port(s), though, which makes it pretty useless for a home computer (which would otherwise be a great place to start playing with it before deploying it elsewhere).


SmartOS is definitely not intended to be used as a desktop system, but it can work great as a home server if you have some spare hardware.

OpenIndiana is another Illumos distro that can be used as a desktop. You get all of the core benefits of Illumos (zones, ZFS, DTrace). The big downside is that you don't get the nice tools for managing zones/VMs that SmartOS provides (vmadm/imgadm - see the sketch below).

FreeBSD has ported a lot of the best features from Illumos so it could also work for you.

Just remember that an OS is only worth running if it has DTrace.
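For anyone curious what those SmartOS tools look like, roughly (the UUID and manifest values here are placeholders, not real images):

    # Fetch an image from the public repository
    imgadm import <image-uuid>

    # Create a zone from a short JSON manifest on stdin
    echo '{"brand": "joyent", "image_uuid": "<image-uuid>",
           "alias": "web1", "ram": 512}' | vmadm create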


Several people I know run ZFS very successfully on Funtoo Linux, Funtoo being a Gentoo fork led by Daniel Robbins himself.

--- http://www.funtoo.org/ZFS_Fun

--- http://www.funtoo.org/wiki/ZFS_Install_Guide

I personally haven't used it so far, but I hear good things :)

I agree with your statement though: ZFS + Jails is a killer pro-BSD argument when it comes to servers.


Even if it won't make it in any Linux distro officially, there is nothing stopping you from using it.


The push/pull mechanism for images seems like it adds a rather large centralised component to an otherwise very standalone piece of kit.

Is this "docker registry server" the final thought on how people will "ship" their containers? ... I'd much rather have the docker cli be able to be able to be configured to use some private repository of images. Maybe I missed something.


Our approach is that you need both:

* A free, public mirror (comparable to PyPI or the Ubuntu mirrors) makes docker instantly more useful. You're one command away from sharing a ready-to-use image with the whole world, or trying someone else's image.

* Docker definitely also needs a mechanism for moving images in a peer-to-peer fashion as you see fit - à la git pull. This is actually more work to get right, but we are working on it (and pull requests are always welcome :). Any docker will be able to target any other docker for push/pull.

Hope this helps.


Thanks, it does.

I can kind of see your point for the "instant gratification" of a public repository... but I'd love to see docker become some kind of standard *NIX-type tool for packing up blocks of software, ready to move them around, and you don't see many simple tools tied to some third-party infrastructure. (Don't get me wrong - I'm not totally against it... I just want this to work, and work forever, without being tied to anything.)

Also, as a lover of Go's idea of doing away with a centralized repo for package management, it would be nice to see a similar approach taken for the handling of Standard Containers.

I'm very excited by all this. It feels like such a step in the right direction for all kinds of deployment problems.


So, you might not read this for a while... But after thinking about this some more, I think Go's decentralized approach to package management is actually a great model. So, expect something very similar in the future :)

Always happy to discuss further on #docker@freenode!


I don't know much about git, but wouldn't it be possible to just build the push/pull mechanism on top of git? Since git already works, why not just use it?


That is the first thing we tried [1]... It turns out it's not super practical. In theory a root filesystem is just another set of files... but git doesn't support POSIX metadata, for example. You have to wrap or hook it to preserve permissions, uid, gid, as well as Unix sockets, device files, etc. Halfway through your hand-made implementation of a file-metadata-serialization tool, you realize: yeah, that's basically what tar is.

Another problem is that in our particular use case (tracking changes to a root filesystem), the space and bandwidth savings of binary diffs are not worth the overhead in execution time. Downloading an image is as easy as extracting a tar stream. 'docker run' on a vanilla ubuntu image clocks in at 60ms. 'docker commit' is as simple as tar-ing the rw/ layer created by aufs. How long would a git checkout of that same image take? What about committing? Not worth it.

[1] https://github.com/dotcloud/cloudlets
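For illustration, this is roughly what the tar-based approach buys you (the paths here are hypothetical):

    # Pack a root filesystem, keeping permissions, uid/gid and device files
    # (--numeric-owner stores raw ids instead of remapping by name)
    tar --numeric-owner -cpf rootfs.tar -C /path/to/rootfs .

    # Unpack it elsewhere with the same metadata intact (-p on extract)
    tar --numeric-owner -xpf rootfs.tar -C /path/to/dest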


Yeah, makes sense. Thanks for clearing that up. :)


If you have a cool screencast showing off what you made using docker, add the links here. We will take the best ones and put them on the docker.io website.


I'm glad to see work in the area of making Linux containers more accessible. I recently stumbled upon openruko [1], an open source Heroku clone, and from there discovered Linux containers and LXC [2]. It takes a bit of configuration to set up useful containers, though. I think the ideas behind Heroku and The Twelve-Factor App [3] are good, and containers are an important building block. I'm excited to see (and I'd like to see more) tools like Docker that aid in robust and streamlined container-based deployments in-house.

[1] https://github.com/openruko [2] http://lxc.sourceforge.net/ [3] http://www.12factor.net/


I've been making heavy use of OpenVZ. The documentation isn't the best, but it's often used for VPS slices.


Awesome product! I've been squinting at LXC for a while now. This looks great!

But: what about this security issue regarding LXC? Does docker take measures to prevent this?

http://blog.bofh.it/debian/id_413

The issue itself appears to be fixed if you use Linux 3.8 with compiled-in namespace support (which breaks NFS and other filesystems at the moment).
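If you want to check whether your own kernel was built with user namespace support, something like this should work (the config path varies by distro):

    grep CONFIG_USER_NS /boot/config-$(uname -r)
    # or, if your kernel exposes its build config:
    zcat /proc/config.gz | grep CONFIG_USER_NS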


Did you guys decide to launch without the "push/pull images" mechanism? I can't find it on the site in 3 clicks or less, so I'm assuming it's not finished yet.

(For anyone who's not the submitter, when this was mentioned about a week ago, @shykes was hinting that there would be some sort of community images repository made before launch.)


It is there; it made it in very recently, so the docs aren't great... but check out the CLI docs for more information on push/pull:

http://docker.io/documentation/commandline/cli.html

We welcome all pull requests and issues, so if you notice something missing in the docs, feel free to let us know, or add :)


It is actually finished, just poorly documented.

    $ docker pull base
    $ CONTAINER=$(docker run -d base apt-get install -y curl)
    $ docker commit -m "Installed curl" $CONTAINER yebyen/betterbase
    $ docker push yebyen/betterbase
It's buggy but it should work. Let me know if you actually do that, I will try to pull it :)


I did actually try this just now, and I got

  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
  <title>500 Internal Server Error</title>
  <h1>Internal Server Error</h1>
  <p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>
It is possible I guess that the server is overloaded. I'd like to think that if someone registered my name, I'd get a better error. It does say I created an account, and it doesn't prompt me for credentials on the second try.

The account creation does not echo Username or Email, which is unusual for account creation, and I think the Password prompt should echo at least the \n even if the password itself is entered blind, as is customary.


Thanks for pointing that out, the 500 error should be fixed!


OK, cool :) I'll try to do something better than just installing curl!

You're right, the docs are indeed poor.


Also, make sure to try 'docker history'. It preserves the graph of which command created which image....

For example, this is the image I use to automatically update docker's binary downloads:

   $ docker pull shykes/dockerbuilder  # Be patient, it's a big one

   $ docker history shykes/dockerbuilder

   $ docker run -i -t shykes/dockerbuilder /usr/local/bin/dockerbuilder $REVISION $S3_ID $S3_KEY


Does shykes/dockerbuilder in this command map transparently to your GitHub 'dockerbuilder'? Do the images come from GitHub too, or the docker community server?

I did not get all of this from your first reply; I guess I should have been wondering where the images are hosted, what's stopping other people from publishing images as yebyen, etc...


No, it maps to the docker registry, which is different from GitHub.

The docker registry is a community server where anyone can add their own images; they just need to log in first (docker login), and then push/pull the images they want to share.

Nothing is stopping you, so go ahead and try it today.


This is really kind of incredible. If I understand it correctly, are containers : virtual machines :: greenthreads : threads?


In terms of spawning time and overhead, yes. In terms of parallelism, containers are like threads ;)


Why is it aufs instead of unionfs? Not that I have a preference for either but I am curious :)


There are issues with unionfs in its current form. Because of those issues, aufs is currently more stable.

Docker isn't tied to aufs; if unionfs or any other union filesystem comes along that is better, we are more than happy to switch to it.
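For context, a union filesystem stacks a writable layer over read-only ones. A minimal aufs sketch (the paths here are hypothetical; requires an aufs-enabled kernel):

    # Stack a writable branch (rw) over a read-only base (ro)
    mount -t aufs -o br=/tmp/rw=rw:/tmp/ro=ro none /mnt/union
    # Writes under /mnt/union land in /tmp/rw; /tmp/ro is never touched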


Can you give some details of those issues?

I'm using unionfs and, while I'm not taxing it very much, I haven't seen any issues so far. I'd love to know what to look out for. Is it easy to set up aufs?


Cool conversation underway on the IRC channel: #docker on freenode.


+1 for #docker@freenode, a very helpful resource.

We've been playing around with docker internally at SendHub for the past few weeks, and so far it looks promising.

I'm /really/ looking forward to seeing and hearing about cool applications people find for docker containers!


As said before, containerization is the new virtualization. This is history in the making, people.


Except it was already done on mainframes and commercial UNIXes; nothing new.


There is a paradigm shift that happens when a great idea becomes open and easily redistributable outside the confines of the "cathedral".


The cathedral is what makes money for the people to contribute to the bazaar in their free time.


And the cathedrals often benefit from the bazaar's innovations as well (e.g. Google's use of Linux both in their datacenters and in Android).

It's not an either-or, it's a symbiosis.


It's also one of the few remaining reasons that people still buy Solaris or AIX boxes.


Well, HP-UX 10 already had quite a good container solution back in 1999, if memory does not fail me.


Kind of like FreeBSD has had with jails from around 2000...


Nobody is claiming that jail/containers are something new. Mass marketization of them is, though. VMs existed way before virtualization too, and yet...


Very excited about the possibilities and convenience that Docker opens up. Will be taking it for a spin soon.

A few questions regarding performance compared to running the processes directly on Linux (I guess these apply to LXC in general - forgive me if they stem from a lack of knowledge about how LXC works):

- How much extra memory and CPU does each Docker process take?

- Is there any performance hit with respect to CPU, I/O or memory?

- Are there any benchmarks from testing available?

Again, kudos to all the people at dotCloud behind Docker and extra props for open sourcing!!


Any plans on supporting network topology / network speed simulation like Mininet does (i.e. the ability to set network latency, throughput and jitter using netem)? It would be useful for testing purposes.
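For reference, the kind of netem knob Mininet turns looks like this (the interface name is illustrative; requires root):

    # Add 100ms +/- 10ms of latency and 1% packet loss on eth0
    tc qdisc add dev eth0 root netem delay 100ms 10ms loss 1%
    # Remove it again
    tc qdisc del dev eth0 root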


Yes! As long as it doesn't break the guarantees of repeatability, we want to support as many networking topologies as possible. We're still looking for the best API to expose that.

I know https://github.com/synack has been experimenting with docker + openvswitch, which opens many exciting possibilities.


How does this compare to OpenVZ? I'm assuming this is much lighter than OpenVZ or am I wrong there?


OpenVZ gives you what they call a container, which looks like its very own Linux OS. You can log in to it, install packages with the package manager, etc. All the processes you run, e.g. Apache/PHP plus MySQL, run in this container. PHP and MySQL would communicate over a local Unix socket, same as on a non-virtualized Linux OS.

Docker appears to run each process in isolation. So you would have two isolated processes (Apache/PHP, plus MySQL) which can only talk to each other over a local network connection (which should be quite fast, with minimal overhead).

(this is my understanding).
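A hypothetical sketch of that model (the image names and commands are made up, and I'm assuming the -p flag for port mapping, as in early docker):

    # Each service runs as its own isolated container
    docker run -d -p 3306 mydb /usr/bin/mysqld_safe
    docker run -d -p 80 myweb /usr/sbin/apache2 -DFOREGROUND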


The underlying technology (LXC) is quite similar to OpenVZ, the main difference being that LXC works with the stock Linux kernel while OpenVZ requires a patched kernel. They are both lightweight virtualization technologies that create separate process and memory spaces inside the same kernel. I think OpenVZ is also much older than LXC.


For a similar but non-LXC version of the same idea, check out http://www.pgbovine.net/cde.html

Basically, it records all the dependencies an application touches while running and creates an environment you can tar up and use on basically any system of the same arch. It even works cross-distro. It wouldn't really work for 'the cloud', as you don't get the same security and isolation as LXC, and you have to make sure all the execution paths are triggered, but for a lot of apps it works great and it's ridiculously easy to use with no setup.
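For reference, CDE usage is roughly this simple (the script name is made up, and the exact wrapper path may differ):

    # Run the program once under cde; every file it touches gets copied
    # into a self-contained cde-package/ directory
    cde python myscript.py

    # Ship cde-package/ to another machine of the same arch and run the
    # auto-generated wrapper script inside it
    ./python.cde myscript.py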


It seems I'm unable to get the tarball by following the instructions.

    $ wget http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-master.tgz
    --2013-03-26 15:43:39--  http://get.docker.io/builds/Linux/i686/docker-master.tgz
    Resolving get.docker.io... 205.251.242.131
    Connecting to get.docker.io|205.251.242.131|:80... connected.
    HTTP request sent, awaiting response... 404 Not Found
    2013-03-26 15:43:40 ERROR 404: Not Found.


I remember reading that Docker and LXC would only work on 64-bit platforms. If your uname -m were x86_64, the file would be there.

Also, I read earlier today that Go's GC on i686 (or really any 32-bit platform?) is not suitable for a daemon like docker; it leaks. This may be fixed in tip. I have no idea.


This is correct, we don't currently support 32-bit hosts, and don't publish 686 builds.

We do plan to add cross-arch support in the future, though - it's just a lot of work to get right.


Then the uname -m part from http://docker.io/gettingstarted.html is a lie!

I'll try it on Windows with vagrant then.


Well, at least you found out right away your architecture wasn't supported :)


Wow, great. Any plans to provide RHEL/CentOS instructions?


Yes, definitely. There's an open issue here: https://github.com/dotcloud/docker/issues/172


Sweet. I will be reviewing this thoroughly as I have been looking forward to it since the PyCon demo!


Just a suggestion regarding the FAQ: What is a Standard Container?

Wouldn't it be better to simply describe it as something similar to a VM snapshot/export? If I export, say, a VirtualBox image, I can move it around and run it on other VM players.

I think the shipping container analogy is simply bad :)


This is the one written in Go right?


Correct, it is written in Go. Check out the repo to see all the wonderfulness. https://github.com/dotcloud/docker


Ah, this is nice to know.

Even though I do have some issues with Go, it is nice to see safer languages being used for this type of work.

Good work.


It is written in Go. As opposed to (?)


There are similar tools written in other languages: Python, C, Perl, etc.


I was hoping someone would advocate for a particular one. (tool, not language)

Maybe not the right article for that.


I would argue for Docker, but I'm biased :)


Maybe they could at least get linked?


Reminds me a lot of HPC tools like `mpirun` and `qsub`.



