Hey everyone! We didn't plan on opening Docker for another few weeks, but last week [1] you guys made it clear you'd rather see it early, warts and all. So here it is! Pull requests are welcome!
No, I understood the context of his message, but allow me to articulate further: "While it may be against the norm around these parts to post a simple note of agreement or gratitude, it doesn't harm the community; on the contrary, it lets the OP know that posts of this kind are good and to do more of them in the future."
However, I agree, sometimes it's nice to have something positive to read in the midst of constructive criticism.
Do you guys have any plans to offer an image library? Pre-built versions of common open source tools, like the official AMIs or the VMware virtual appliance directory?
I understand it doesn't exactly match your primary goal, but I think there is a large untapped demand for lightweight appliances for use at home or inside the firewall that Juju really isn't set up to satisfy. I love the similar feature on my Synology NAS, although those are really just packages. There is often a large gulf between knowing how to set something up and being willing to do it.
The more I look into FreeBSD, the more I realise how awesome it is. This same thing has been in FreeBSD for ages: jails.
(maybe not the fancy deployment, but the container technology)
That, together with ZFS, turns FreeBSD into one hell of a server OS. ZFS seriously needs to come to Linux. And don't say license problem: "there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code."[1]
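For anyone who hasn't tried it, here's a quick taste of why ZFS is worth the fuss - a minimal sketch, where the pool, dataset and device names are just examples:

$ zpool create tank mirror ada0 ada1          # build a mirrored pool out of two disks
$ zfs create -o compression=on tank/home      # carve out a dataset with compression turned on
$ zfs snapshot tank/home@before-upgrade       # instant snapshot...
$ zfs rollback tank/home@before-upgrade       # ...and instant rollback if an upgrade goes wrong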
SmartOS does seem to be missing an X server you can run in the global zone on the physical machine's video port(s), though, which makes it pretty useless for a home computer (which would otherwise be a great place to start playing with it before deploying it elsewhere).
SmartOS is definitely not intended to be used as a desktop system, but it can work great as a home server if you have some spare hardware.
OpenIndiana is another Illumos distro that can be used as a desktop. You get all of the core benefits of Illumos (zones, ZFS, DTrace). The big downside is that you don't get the nice tools for managing zones/VMs that SmartOS provides (vmadm/imgadm).
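For context, the workflow those tools give you on SmartOS looks roughly like this (the UUID and the manifest values are placeholders, not real ones):

$ imgadm avail              # list images available from the public source
$ imgadm import <uuid>      # pull one down into the local zpool
$ vmadm create <<'EOF'
{
  "brand": "joyent",
  "image_uuid": "<uuid>",
  "alias": "web1",
  "max_physical_memory": 512,
  "nics": [{"nic_tag": "admin", "ip": "dhcp"}]
}
EOF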
FreeBSD has ported a lot of the best features from Illumos so it could also work for you.
Just remember that an OS is only worth running if it has DTrace.
The push/pull mechanism for images seems like it adds a rather large centralised component to an otherwise very standalone piece of kit.
Is this "docker registry server" the final thought on how people will "ship" their containers? ... I'd much rather have the docker cli be able to be able to be configured to use some private repository of images. Maybe I missed something.
* A free, public mirror (comparable to PyPI or the Ubuntu mirrors) makes docker instantly more useful. You're 1 command away from sharing a ready-to-use image with the whole world, or trying someone else's image.
* Docker definitely also needs a mechanism for moving images in a peer-to-peer fashion as you see fit - a la git pull. This is actually more work to get right, but we are working on it (and pull requests are always welcome :). Any docker will be able to target any other docker for push/pull.
I can kind of see your point about the "instant gratification" of a public repository... but I'd love to see docker become some kind of standard *NIX tool for packing up blocks of software, ready to move them around, and you don't see many simple tools tied to some third-party infrastructure. (Don't get me wrong - I'm not totally against it... I just want this to work, and work forever, without being tied to anything.)
Also, as a lover of Go's idea of doing away with a centralized repo for package management, it would be nice to see a similar approach taken for handling Standard Containers.
I'm very excited by all this. It feels like such a step in the right direction for all kinds of deployment problems.
So, you might not read this for a while... But after thinking about this some more, I think Go's decentralized approach to package management is actually a great model. So, expect something very similar in the future :)
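To make the analogy concrete: Go resolves packages by URL-like import paths instead of a central index, e.g.

$ go get github.com/someuser/somepkg    # fetched straight from the host named in the path

so a decentralized image pull could plausibly end up looking something like this (purely hypothetical syntax, nothing like it exists today):

$ docker pull images.example.com/someuser/someimage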
Always happy to discuss further on #docker@freenode!
I don't know much about git, but wouldn't it be possible to just build the push/pull mechanism on top of git? Since git already works, why not just use it?
That is the first thing we tried [1]... It turns out it's not super practical. In theory a root filesystem is just another set of files... but git doesn't support POSIX metadata, for example. You have to wrap or hook it to preserve permissions, uid, gid, as well as unix sockets, device files etc. Halfway through your hand-made implementation of a file-metadata-serialization tool... you realize, yeah, that's basically what tar is.
Another problem is that in our particular use case (tracking changes to a root filesystem) the space and bandwidth savings of binary diffs are not worth the overhead in execution time. Downloading an image is as easy as extracting a tar stream. 'docker run' on a vanilla ubuntu image clocks in at 60ms. 'docker commit' is as simple as tar-ing the rw/ layer created by aufs. How long would a git checkout of that same image be? What about committing? Not worth it.
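For the curious, this is roughly why tar wins here - a sketch, with the archive and target names being just examples:

$ tar --numeric-owner -cf layer.tar -C rw/ .       # modes, uid/gid and device nodes come along for free
$ tar --numeric-owner -xpf layer.tar -C rootfs/    # -p restores permissions exactly on extract

Getting the equivalent out of git means hooks and wrapper scripts for every one of those attributes.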
If you have a cool screencast showing off what you made using docker, add the links here. We will take the best ones and put them on the docker.io website.
I'm glad to see work in the area of making linux containers more accessible. I recently stumbled upon openruko[1], an open source Heroku clone, and from there discovered linux containers and lxc[2]. It takes a bit of configuration to set up useful containers, though. I think the ideas behind Heroku and The Twelve-Factor App[3] are good, and containers are an important building block. I'm excited to see (and I'd like to see more) tools like Docker that aid in robust and streamlined container-based deployments in-house.
Did you guys decide to launch without the "push/pull images" mechanism? I can't find it on the site in 3 clicks or less, so I'm assuming it's not finished yet.
(For anyone who's not the submitter, when this was mentioned about a week ago, @shykes was hinting that there would be some sort of community images repository made before launch.)
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
It is possible I guess that the server is overloaded. I'd like to think that if someone registered my name, I'd get a better error. It does say I created an account, and it doesn't prompt me for credentials on the second try.
The account creation does not echo Username or Email, which is unusual, and I think Password should echo at least the \n even if the password is entered blind, as is customary.
Also, make sure to try 'docker history'. It preserves the graph of which command created which image....
For example, this is the image I use to automatically update docker's binary downloads:
$ docker pull shykes/dockerbuilder # Be patient, it's a big one
$ docker history shykes/dockerbuilder
$ docker run -i -t shykes/dockerbuilder /usr/local/bin/dockerbuilder $REVISION $S3_ID $S3_KEY
Does shykes/dockerbuilder in this command map transparently to your GitHub 'dockerbuilder'? Do the images come from GitHub too, or the docker community server?
I did not get all of this from your first reply; I guess I should have been wondering where the images are hosted, what's stopping other people from publishing images as yebyen, etc...
No, it maps to the docker registry, which is different from GitHub.
The docker registry is a community server where anyone can add their own images; they just need to log in first (docker login), and then push/pull the images they want to share.
Nothing is stopping you, so go ahead and try it today.
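Concretely, the whole flow is something like this (the image name is just an example):

$ docker login                  # create an account / authenticate against the registry
$ docker push yebyen/myimage    # publish your image
$ docker pull yebyen/myimage    # anyone else can now grab it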
I'm using unionfs, and while I'm not taxing it very much I haven't seen any issues so far. I'd love to know what to look out for. Is it easy to set up aufs?
Nobody is claiming that jails/containers are something new. Mass-market adoption of them is, though. VMs existed way before the virtualization boom too, and yet...
Very excited about the possibilities and convenience that Docker opens up. Will be taking it for a spin soon.
A few questions regarding performance compared to running the processes directly on Linux (I guess this applies to LXC in general - forgive me if these are due to a lack of knowledge about how LXC works):
- How much extra memory and CPU does each Docker process take?
- Is there any performance hit with respect to CPU, I/O or memory?
- Are there any benchmarks from testing available?
Again, kudos to all the people at dotCloud behind Docker and extra props for open sourcing!!
Any plans on supporting network topology / network speed simulation like Mininet does? (i.e. the ability to set network latency, throughput and jitter using netem).
Would be useful for testing purposes.
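(For reference, this is the kind of thing netem does on a plain interface - eth0 and the numbers are only examples:

$ tc qdisc add dev eth0 root netem delay 100ms 20ms loss 0.5%   # 100ms latency, 20ms jitter, 0.5% packet loss
$ tc qdisc del dev eth0 root                                    # back to normal

Being able to drive that per-container would be great.)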
Yes! As long as it doesn't break the guarantees of repeatability, we want to support as many networking topologies as possible. We're still looking for the best API to expose that.
I know https://github.com/synack has been experimenting with docker + openvswitch, which opens many exciting possibilities.
OpenVZ gives you a container, as they call it, which looks like its very own Linux OS. You can log in to it, install packages with the package manager, etc. All the processes you run, e.g. Apache/PHP plus MySQL, run in this VM. PHP and MySQL would communicate over a local Unix socket, same as on a non-virtualized Linux OS.
Docker appears to run each process in isolation. So you would have 2 isolated processes (Apache/PHP, plus MySQL) which can only talk to each other over a local network pipe (which should be quite fast, with minimal overhead).
The underlying technology (LXC) is quite similar to OpenVZ, the main difference being that LXC comes standard with the Linux kernel while OpenVZ comes as a module. They are both lightweight virtualization technologies that create separate process and memory spaces inside the same kernel. I think OpenVZ is also much older than LXC.
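For anyone who hasn't touched OpenVZ, the "full OS you log into" workflow looks roughly like this (the CTID and template name are just examples):

$ vzctl create 101 --ostemplate centos-6-x86_64
$ vzctl set 101 --ipadd 10.0.0.101 --save
$ vzctl start 101
$ vzctl enter 101    # drops you into what feels like a complete, separate Linux box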
Basically it records all the dependencies an application touches while running and creates an environment you can tar up and use on basically any system of the same arch. It even works cross-distro. It wouldn't really work for 'the cloud', as you don't get the same security and isolation as LXC and you have to make sure all the execution paths are triggered, but for a lot of apps it works great, and it's ridiculously easy to use with no setup.
I remember reading that Docker and LXC would only work on 64 bit platforms. If your uname -m was x86_64, the file would be there.
I also read earlier today that Go's GC on i686 (or really any 32-bit platform?) is not suitable for a daemon like docker - it leaks. This may be fixed in tip; I have no idea.
Just a suggestion regarding the FAQ: What is a Standard Container?
Wouldn't it be better to simply describe it as something similar to a VM snapshot/export? If I export, say, a VirtualBox image, I can move it around and run it on other VM players.
I think the shipping container analogy is simply bad :)
[1] https://news.ycombinator.com/item?id=5408002