* A free, public mirror (comparable to pypi or the ubuntu mirrors) makes docker instantly more useful. You're 1 command away from sharing a ready-to-use image with the whole world, or trying someone else's image.
* Docker definitely also needs a mechanism for moving images in a peer-to-peer fashion as you see fit, à la git pull. This is actually harder to get right, but we are working on it (and pull requests are always welcome! :). Any docker will be able to target any other docker for push/pull.
I can kind of see your point about the "instant gratification" of a public repository... but I'd love to see docker become some kind of standard *NIX-style tool for packing up blocks of software and moving them around, and you don't see many simple tools tied to third-party infrastructure. (Don't get me wrong, I'm not totally against it; I just want this to work, and work forever, without being tied to anything.)
Also, as a lover of Go's idea of doing away with a centralized repo for package management, it would be nice to see a similar approach taken for the handling of Standard Containers.
I'm very excited by all this. It feels like such a step in the right direction for all kinds of deployment problems.
So, you might not read this for a while... But after thinking about this some more, I think Go's decentralized approach to package management is actually a great model. So, expect something very similar in the future :)
Always happy to discuss further on #docker@freenode!
I don't know much about git, but wouldn't it be possible to just build the push/pull mechanism on top of git? Since git already works, why not just use it?
That is the first thing we tried [1]... It turns out it's not super practical. In theory a root filesystem is just another set of files, but git doesn't support posix metadata, for example. You have to wrap or hook it to preserve permissions, uid, gid, as well as unix sockets, device files, etc. Halfway through your hand-made implementation of a file-metadata-serialization tool, you realize: yeah, that's basically what tar is.
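A minimal sketch of that metadata point (file and directory names here are purely illustrative): tar records mode, uid, and gid inside the archive itself, so a file's permissions survive a round trip, whereas git only tracks content plus the executable bit.

```shell
# Create a file with restrictive permissions, the kind of posix
# metadata git would not preserve on its own
mkdir -p rootfs
echo "secret" > rootfs/credentials
chmod 600 rootfs/credentials

# -p tells tar to record/restore full permissions
tar -cpf rootfs.tar -C rootfs .

# Extract into a fresh directory and check that the mode survived
mkdir -p restored
tar -xpf rootfs.tar -C restored
stat -c '%a' restored/credentials   # 600
```

(GNU `stat -c` shown; BSD/macOS uses `stat -f` instead.)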
Another problem is that in our particular use case (tracking changes to a root filesystem), the space and bandwidth savings of binary diffs are not worth the overhead in execution time. Downloading an image is as easy as extracting a tar stream. 'docker run' on a vanilla ubuntu image clocks in at 60ms. 'docker commit' is as simple as tar-ing the rw/ layer created by aufs. How long would a git checkout of that same image be? What about committing? Not worth it.
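The tar-based flow described above can be sketched roughly like this (directory names are illustrative, not docker's actual on-disk layout):

```shell
# "docker commit", roughly: archive the writable layer that aufs
# exposes as a directory of changed files
mkdir -p rw
echo "change made inside the container" > rw/newfile
tar -cf layer.tar -C rw .

# "docker pull/run" prep, roughly: extract the layer tar onto a
# rootfs; no binary diffing, just a straight tar stream
mkdir -p rootfs
tar -xf layer.tar -C rootfs
cat rootfs/newfile
```

No content-addressed object store, no checkout machinery: commit and download are both bounded by how fast tar can stream files.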
Hope this helps.