I am a big believer in containerization technology from a practical standpoint. It has allowed me to create repositories that act as services. Database, search, API, admin, etc. -- each is its own service. I do not have to configure any servers this way; instead, I declare what the system ought to be and Docker makes it happen. I don't even have to configure init scripts, because a proper Dockerfile contains a start mechanism, so the image behaves like any other executable: `docker run your/api --port 80 --host host.company.com`.
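A minimal sketch of what that start mechanism tends to look like (the base image, file layout, and entrypoint here are made up for illustration, not my actual setup):

```
# Hypothetical Dockerfile for the api service: ENTRYPOINT is the "start
# mechanism", so extra `docker run` arguments become flags to the app.
FROM node:0.12
COPY . /app
WORKDIR /app
RUN npm install --production
ENTRYPOINT ["node", "server.js"]
# `docker run your/api --port 80 --host host.company.com` ends up running:
#   node server.js --port 80 --host host.company.com
```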
The only thing that matters then between services is their bindings, which gives you the ability to use any programming language for any service. Deployment with ECS has been going well so far for me. My flow:
1.) Push code to GitHub
2.) GitHub tells Docker, which builds a private image
3.) Docker tells my build server
4.) Build server tells ECS to update the given service (see the sketch just after this list)
5.) ECS pulls from DockerHub, stops service, starts service
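For step 4, the gist is a couple of AWS CLI calls, roughly like this (cluster, service, and family names are made up, and the container settings will differ per service):

```
# Register a new task definition revision pointing at the freshly built image,
# then point the service at it. ECS then cycles tasks onto the new revision.
aws ecs register-task-definition --family api \
  --container-definitions '[{"name":"api","image":"your/api:staging","memory":256,"portMappings":[{"containerPort":80,"hostPort":80}]}]'
aws ecs update-service --cluster staging --service api --task-definition api
```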
The only thing missing is that DockerHub doesn't tell my build server what tag it just built! It builds tags like dev / staging for the given branches, but doesn't relay that info over its webhook. There's a ticket open about this already and I'm sure they'll get to it soon.
Nevertheless, I'm able to administer any system -- things like Elasticsearch, db, api -- from code on a branch. This is powerful to me because I have to administer environments for everything. Rather than do all this work with Puppet, Chef, or even Ansible, I can just declare what the systems ought to be and cluster them within code branches.
With ECS coming into the picture, you're encouraged to forget you even have physical boxes in the first place. If you think of the power at your fingertips that results from this development workflow, I believe it's a no-brainer for everyone to jump on board and get this as good as it can be. It's going to be a huge boon to the software community and enable more sharing of services.
Rather than using Puppet etc.? Basically you've returned to the days before configuration management, and that is somehow good?
How do you change network configuration, load balancers or other external configuration in lock step with your application? Scripting by hand?
And deployments are just a small part of configuration management. How do you find out which of your applications use libz 1.2.3 (to measure CVE applicability, for example)? By looking in each and every one of them?
How do you find out which applications run a cron job? And which applications connect to backend x without going through the load balancer?
Which application has a client certificate about to expire? How do you guarantee two applications have the same shared secret, and change it in lock step?
With configuration management all this is just a lookup away. And best of all, most of this information is authoritative.
When I come somewhere new, the use of these tools instead of a directory of miscellaneous scripts is like night and day. It literally turns a two day job into a ten minute one when the environments are complex.
Docker is great for a lot of things. It enables you to do daring things to your environment, even one so complex you don't fully understand it all, because you can just shuffle containers around. But I would never use it instead of configuration management.
> Basically you've returned to before configuration management and that is somehow good?
I would argue I accomplish most configuration management via the Dockerfile. In Docker 1.7, even if I want to use a different filesystem like ZFS, I can.
I agree with you that I have probably missed many important features in my architecture. However, it's simple and works for what I've got. It sounds like in the future I'll need a combination of Puppet/Chef and Docker rather than relying purely on ECS. For now though, it's quick & easy. Thanks for the thoughts, they all make me think.
Re: '...Rather than do all this work with Puppet, Chef, or even Ansible, I can just declare what the systems ought to be and cluster them within code branches...'
It sounds like the poster is doing something similar to a 'Gold System Image' though with Container technologies.
Configuration Management is great, but I'm of the view that you should start with a combination of gold images/system builds and then stack Cfg Mgmt on top of that.
The theory is that you have a known base and identical builds; otherwise you get subtle drift in your configurations, especially for things that are not explicitly under the control of Config Mgmt (shared library versions, packages).
Of course, this is not always feasible but if I were to start from scratch, I would probably try to do it this way.
You want to have your "Gold system image" available, and this is most certainly what you should be deploying from; however, your configuration management - whether it's Chef, Puppet, whatever - should be able to take a base operating system and set it up completely from scratch.
This solves the problem of ensuring that you can completely reproduce your application from scratch, while the "Gold image" that has already done this work removes the possibly horrendously slow scale-up time.
My current process is: Jenkins runs puppet/chef, verifies that the application is healthy after the run and everything is happy, calls AWS and images the machine, then it iterates over all instances in my load balancer 33% at a time and replaces them with the new image, resulting in zero downtime. Of course, another solution is to pull those instances out, apply the update, and then put them back in.
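For the "pull out, update, put back" variant, the skeleton is roughly this (the load balancer name is hypothetical, and the middle step is whatever your puppet/chef run or AMI swap looks like):

```
# Take each instance behind the ELB out of rotation, update it, and put it back.
for id in $(aws elb describe-instance-health --load-balancer-name my-elb \
              --query 'InstanceStates[].InstanceId' --output text); do
  aws elb deregister-instances-from-load-balancer --load-balancer-name my-elb --instances "$id"
  # ... run puppet/chef here, or terminate and replace with the new image ...
  aws elb register-instances-with-load-balancer --load-balancer-name my-elb --instances "$id"
done
```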
And I'm sure someone else will have their $0.02 on their own process, which actually I'd love to hear :-)
Here's mine, with the preface that we're still iterating our deployment procedure as things are quite early.
I have an Ansible repo with an init task, which configures new boxen (makes sure proper files, dependencies etc. exist). Then to deploy, I have another task that ships a Dockerfile to the target boxen group (dev, staging, or prod) and has them build the new image, then restart. This happens more-or-less in lockstep across the whole group, and scaling up is relatively easy - just provision more boxen from AWS and add the IPs to the Ansible inventory file. Config is loaded from a secrets server, each deploy uses a unique lease token that's good for 5 minutes and exactly one use.
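In docker terms, the deploy task boils down to something like this on each box (names and ports are hypothetical, and the real thing is an Ansible task rather than a shell script):

```
# Build the image from the shipped Dockerfile, then replace the running container.
docker build -t myapp:latest /opt/myapp
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp -p 80:8080 myapp:latest
```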
I'd love to hear how to improve this process, since I'm dev before ops. My next TODO is to move Docker image building locally and deploy the resulting tarball instead (though that complicates the interaction with the secrets server).
Is that really the state of things? It is what I concluded after looking into this a bit for Docker, but it seems incredible to me that so many companies are jumping into this idea of containerization without any good & available solutions for this problem.
One potential solution that came to mind was that if there was a standard way of deploying an application into containers, and Google/Amazon/Microsoft provided auto-updating containers, the maintenance of a secure container would be in the hands of companies who (hopefully) have the resources necessary to keep the entire stack up-to-date.
We tend to handle that part in-house. We're using Jenkins and have it set up to build a standard set of base images with all the latest updates daily. It can be run on-demand as well.
All containers running code are based on these images, so the updates are picked up on the next build/deploy.
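A rough sketch of what such a daily job can look like (the registry and image names are placeholders, not our actual setup):

```
# Rebuild the base image with the cache disabled so the package upgrade step
# actually runs against today's package lists, then push it for downstream builds.
docker build --pull --no-cache -t registry.internal/base:latest base/
docker push registry.internal/base:latest
```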
They seem to be betting the farm on containerization containing (heh) whatever security issues come up.
This in the sense, I guess, that if there's a security flaw in their PHP that gives disk access, all the attacker will see is the content of the PHP container, as the database will be in the next container over.
Then again, containerization seems to have come alongside devops, where the mantra seems to be "update early, update often, to hell with stable branches".
I've heard of that approach (breaches being limited to a container), but I don't think it makes sense.
If a security flaw exists in one container due to the stack not being updated, isn't there a pretty good chance that it also exists in the other containers?
Also, for any given container, there probably still is a way for an attacker to do immense amounts of damage. With the database container you can steal customer data. With the PHP container you can remotely instruct the database to do whatever you want, or just point the code at your own database.
When building a container, I personally look at the docker-alpine [1] image and see if I can just use that. Since it's musl instead of glibc, there are some incompatibilities with things like Oracle JVM, but it's a great runtime container for dynamic languages like Ruby and Python, especially if you need some additional libraries or helper programs.
Love it for quick scripting containers (like netcat) too, since it downloads in like 2 seconds.
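As a rough illustration of the runtime-container case (the packages and file layout below are just examples of the "additional libraries or helper programs" point, not a specific recommendation):

```
# Hypothetical Python service on the docker-alpine base from [1].
FROM gliderlabs/alpine:3.2
RUN apk --update add python py-pip ca-certificates && rm -rf /var/cache/apk/*
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```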
We've had some success with auditing containers and saving the results to a DB (for instance, on running containers you can easily exec, say, 'dpkg -l'). If a package has a security issue, you know which containers have it, and you kick off your CI/CD to rebuild and redeploy with the fixed package.
I don't see how containerization changes your security strategy in this regard, except instead of ssh'ing into a system and running `apt-get update && apt-get install ...`, you `docker build && docker run`
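A rough sketch of the audit step described above, assuming Debian-based images (the output files are what would get loaded into the DB afterwards):

```
# Dump the package list of every running container into a file named after it.
mkdir -p audit
for c in $(docker ps -q); do
  name=$(docker inspect -f '{{.Name}}' "$c" | tr -d '/')
  docker exec "$c" dpkg -l > "audit/$name.txt"
done
```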
I'd love to hear how you're handling the ECS part of this. From the experimentation I've done, it's not just a matter of telling ECS to update the service unless you're running 2x the number of container instances you need -- it won't pull resources out from the existing service revision in order to spin up the new ones.
Certainly can be worked around, but it's a little annoying.
> If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service, one task at a time. The service scheduler creates a task with the new task definition (provided there is an available container instance to place it on), and after it reaches the RUNNING state, a task that is using the old task definition is drained and stopped. This process continues until all of the desired tasks in your service are using the new task definition.
You're going to get some downtime with #2. I'm not sure about #1, because I don't understand how something like an API service could be run without erroring out because the port it's trying to bind to and forward is not available. So I'm not sure how it would ever reach the RUNNING state... but I haven't experimented with going about it that way. I can handle the small amount of downtime.
Thanks! Yeah, those were the only options I was aware of, but the "provided there is an available container instance to place it on" issue is a problem due to port availability, and I was looking for zero-downtime deploy options.
Re: services sharing, we are actually hacking on a database of useful containerized services which are hosted on GitHub and can be run anywhere through HTTP or JSON-RPC. The product is super early, but we would love your feedback on what you'd like to see: www.stackhut.com . We're just about to push the ability to submit your own.
I think the key thing is that users shouldn't need sysops or devops knowledge to be able to take advantage of the benefits of containers. Most developers we've spoken to don't have time to learn a completely new paradigm; they just want the benefits.
ECS is pretty exciting, but isn't the same level of abstraction accomplished with GCE/Fleet/k8s? Even the Kubernetes or Fleet install process on EC2 lets you quickly forget about the actual boxes.
CaaS is definitely the future of cloud. However, I don't think ECS (IaaS + containers) is the way to go. In this approach, you still need to take care of cluster capacity planning, scaling, failover, etc. And most of the time there is some capacity sitting idle in your ECS cluster.
The evolution of CaaS would be something like hyper.sh. Due to its hypervisor nature, the whole cluster + scheduling layer can be made transparent to developers, who simply "compose" the app spec and submit it to run.
I am a more front-end-heavy web developer. I know my way around APIs and whatnot, but what you are describing sounds like backend bliss. I don't even know where to start, but is there some way you can share more about the magic of your process? Like an example of a GitHub repo for search and how you plug that into an app?
Hey, we're actually hacking on solving this exact problem at StackHut.com, which is a platform for containerised services that you can call immediately through HTTP. I think your sentiment is super-common: building microservices / container-enabled services and getting all the good stuff that comes with them is still too much of a learning curve for most devs. We're just about to push live the ability to add new services: would love to get your thoughts on what you'd like to see: leo@stackhut.com
Would love to see this as well. I know my way around the CLI but not enough to even grok what Docker does. Any good overviews/getting starteds/tutorials out there?
This is a somewhat novel workflow from what I can gather, so no. If you worked backwards from my flow though, you could figure it out no problem. Maybe I'll do a blog post in the future about it. I just don't have the time these days.
I don't understand how you configure your host-specific service dependencies, e.g. connection strings. Ansible (Puppet, Chef) does variable interpolation / templating for this purpose.
This is a hard problem, but outside the scope of Docker. There are numerous cluster orchestration/management systems springing up around the ecosystem. (We don't necessarily have one ready for prime time yet, but a lot of very good work is taking place.)
Do you deploy the same docker image to each environment, and somehow retrieve the configuration files on startup? Or do you rebuild your images as you promote from dev to staging to prod?
On a side note, for microservices that expose a RESTful API I would also take a look at Kong (https://github.com/Mashape/kong) for delegating common functionality like security or rate-limiting.
Yes container images would become portable between systems, but if you hide the underlying system enough under abstraction layers what makes me choose between CoreOS or Docker or the future thing? What's the value difference?
Containers are useful if you have the build systems in source control, but if you don't, you don't know how to rebuild them or what is in them - they become dangerous in that case. They become scary "golden images".
Dockerfiles already make it very easy to regenerate things -- and interface-wise, I think that's one of the more compelling wins. If there were other systems, it's still likely they would have different provisioners.
It seems the (excuse me for the buzzword) value-add then quickly shifts to the people providing management software for Docker, rather than Docker itself, and Docker becomes more or less a subcommittee of a standards body.
I'm sure that's NOT true, but it's confusing to me why they wouldn't want to seek differentiation, and what this means for valuation purposes.
> but if you hide the underlying system enough under abstraction layers what makes me choose between CoreOS or Docker or the future thing
From the Google/Amazon/Microsoft POV, that's part of the point.
From the CoreOS/Docker POV, the fact that the container technology that the major cloud hosts support is going to define their market whether they participate or not is probably why they have an incentive to participate.
> It seems the (excuse me for the buzzword) value add then quickly becomes in the people providing management software for Docker, rather than in Docker
Docker-the-company is probably going to be one of the "people" providing management support for the new standard container platform. They also have the advantage of being one of the players defining the standard and one of the key implementers of the basic software, which will help them in building management software.
> I'm sure that's NOT true, but it's confusing why they wouldn't want to seek differentiation to me
Sure, they want to seek differentiation. But they probably don't want to differentiate themselves as "the vendor of the container technology that isn't backed by any of the major cloud hosts" when there is a container technology that is backed by those hosts.
> "Docker-the-company is probably going to be one of the "people" providing management support for the new standard container platform"
You would think, but it seems Kubernetes, CoreOS, ECS, and Mesos are the obvious picks right now for management stacks. Docker could drop a shiny UI on top of Swarm or something else soon, sure, and then build off the name recognition, but they are definitely letting everyone have a running start. There's something to be said for last-mover advantage, but these types of apps are also non-trivial in storage/networking/other areas and somewhat hard to displace.
Obviously I am biased (working on Docker), but I thought I would mention a few facts.
- Today AWS announced they would support Docker Swarm and Compose natively on ECS.
- Last month devops.com surveyed development and ops teams about their usage of containers. Their conclusion was: "While this is still a nascent market, some early leaders did emerge from the responses. Docker Swarm, the orchestration technology from Docker itself, was the clear winner, with nearly 50% of respondents indicating that they planned to investigate Swarm. Close behind were Kubernetes and Mesos."
Personally I think the important part of that quote is "this is still a nascent market". We are at the very, very beginning of these sorts of platforms, and nobody really knows what they will evolve into in a few years. So I really don't think it matters who has the most "market share" of the 0.00001% of IT organizations actually aware of these tools. The innovation is only just getting started.
>Yes container images would become portable between systems, but if you hide the underlying system enough under abstraction layers what makes me choose between CoreOS or Docker or the future thing? What's the value difference?
Obviously tooling and support?
(It's analogous to asking what makes you choose between programming editors, since they all edit text files? Or what makes you choose between distros, since they all run Linux and the usual userland.)
But commoditising the docking point, e.g. AWS/Google/Azure to whom rent is paid, is not the same as building a distro for distribution, which will follow the builder's particular idiom.
I choose a distro based on the package manager - I'm used to Debian so I generally prefer apt based distros to play with.
I prefer BSD over Linux and plan9 over all of them, but they are not fungible.
I do pay my supplier a monthly fee to host my container, and for that the metric is (bandwidth + speed + storage space) / price.
I'm asking more about business/product value, given Docker's investment levels (congrats on this) and that other companies are seemingly doing container management better and investing more labor here. This seems to leave Docker with (A) DockerHub or (B) a foundation for profit options. Both might be totally valid, but it's unclear.
Sorry for crossing streams.
That standardization makes it easier for the orchestration companies and clouds is obvious. I'm just legitimately curious what this means for Docker, Inc. and the business model, since it seems to be seeding the lower end while not investing in the upper end as much -- Docker itself not being a tremendously large amount of plumbing, and being all OSS, it's easy to replicate. So what they have is basically support and the leadership of that community.
As of right now, this reads like I'll be able to use everything on Mesos/CoreOS/ECS and just swap out a backend, so it's unclear why I would want to pick things from Docker Inc. It's like I get pluggable tooling where all the frontends can speak to all the backends and the image format is the same -- so it seems differentiation would have to happen at the top, in the tooling, which is weird seeing that the effort has gone into the bottom end and other companies have done a lot at the top.
Perhaps there's some messaging to address here. Perhaps there's enough funding that this isn't a concern even for the next five years. I don't know, but I'm curious. It's useful to know this to tell where container-land is going, and it's an uncertain time for picking which orchestration/management software to run Docker clouds with. (We can probably guess ECS is going to be around; roadmaps of others are subject to speculation.)
Mostly because I find the evolution of tools in this space interesting.
If you're interested in Docker's opinion on this topic, I recommend that you watch today's keynote. That's where we introduced runC and where we explain why.
They're building a playing field and funding teams in the hopes you will come and buy tickets, popcorn, beer, novelty hats, etc. The value add to them is they have created an entirely new marketplace in which to sell you crap. The value add for you is simply that all the teams will play on the same field.
This is great news and a shift in the way business is traditionally done. If containers were a thing 20 years ago, there would be fierce vendor lock-in and patents/lawsuits flying everywhere. People would choose which cloud platform to deploy on based upon which tools they prefer.
Docker has fundamentally changed the way they think about how they fit into the tech ecosystem. Instead of selling a set of containers that only work with their tools, they've opened up the platform, strengthening their position as the go-to solution for management. Prudent move on their part. It limits their potential market cap but solidifies them as an entrenched member for the foreseeable future.
These are great times! Companies learnt that alone/closed/private doesn't drive innovation... You can see in the latest news many standards been made by the companies together... Working with docker for dev-environment has been fantastic (for me), really fast and easy to start/stop/modify different setups.
Companies don't drive innovation. Innovation is just the side product of the pursuit of profit. They fill a 'need' in society and get rewarded. If this just so happens to be innovative then so be it.
I was going to ask why IBM weren't in, but read on to see that it's a general Linux Foundation collaboration, and so naturally they're part of it.
So I guess we're going to have libcontainer support for AIX Workload Partitions and OS/400 LPARs? It's gonna be interesting to see just how big the Docker libs become.
The proof is in the pudding. Overall this is very positive for the ecosystem as a whole, and I'm glad to see them all come together. But I thought a big selling point of a standard is that it's written down; currently the spec returns a 404 on GitHub [1], so there seem to be a lot of unknowns about what's actually being proposed.
It's confusing why the App Container (appc) spec, which is written down [2] and has maintainers from RedHat, Twitter, Google, Apcera, and CoreOS [3], is not being promoted - what does the new OCP standard offer that isn't in the appc spec?
Consider that appc was basically a tactical move from CoreOS to force Docker to play ball and address concerns they had, or face unwelcome competition.
OpenContainers addresses one of the fundamental issues already with the initial "runc" tool - namely the issue of playing nicely with Systemd and not requiring a separate daemon.
It also appears to be headed towards addressing a second one: A spec.
It would also seem that what's there so far with 'runc' could be used as part of an appc implementation easily enough, and conversely, there seems to be little reason why this couldn't be supported by appc implementations (or others, like LXC).
So what we're (hopefully) getting is a compromise that lets people compete on value adds rather than fighting over the basics.
>But I thought a big selling point of a standard is that it's written down; currently the spec returns a 404 on GitHub [1], so there seem to be a lot of unknowns about what's actually being proposed.
To clarify, the Open Container Project was started by Docker with the help of the Linux Foundation. Then other vendors were invited (including AppC maintainers). We did this because there was a clear demand for transforming a de-facto standard (the Docker format) into a proper standard (OCF), and for opening the governance of our runC implementation.
Since AppC is a completely different format from what Docker uses, starting from that would have defeated the purpose. However it made a lot of sense to invite the people behind AppC to join, so that we could all build a better spec and implementation together, instead of arguing on technical details that don't matter.
Serious question. We have a master DB and a slave, two memcache servers and 3 webservers behind a load balancer. We're not a public-facing company and so have no reason to be building for "web scale" or whatever, we're well within capacity.
Deploying new code (happens weekly) is as simple as clicking one deploy button in our version control system (which does a "git pull" on the web servers). DB changes (which are very rare, once or twice a year) we run manually. The cache servers never change. All of the servers run automated security updates on the OS. Otherwise we upgrade non-essential packages every few months.
Is there a way that using Docker could make things better for us? I feel the "you should be using Docker" pressure coming at me from every angle. Our deployment is certainly not very sexy, but it is simple and doesn't take a major amount of effort. Is there a use case for a company like mine?
If your server configurations change for some reason in the future (sounds unlikely) it might be easier to start from scratch with Docker than manually adjust the servers; but that all depends on your situation.
Not really. If your master burns and dies, your recovery or promotion process on the slave is probably manual. That's OK, but you could make it automated. Docker wouldn't actually solve the hard bits, but would possibly make the automation easier.
This is my fear as well... I mean, look at OpenStack: it's horrible, convoluted, excessively complicated, and pretty much a pain in the ass.
As it stands, docker represents a very clean level of abstraction for application/data containers... combined with any number of management layers it works... but the container layer itself is simple to use in practice. Bringing Windows into the mix will only muddle things a lot.
I do appreciate Joyent's work on bringing Linux compatibility to SmartOS for Docker support, but my hope is that MS doesn't really screw things over here... which I can really see happening given how alien the OS is compared to Linux. Unless MS creates its own common-core userspace, which, combined with their efforts to open up .NET, could happen, I don't see Windows applications fitting in with the rest, and it would create two divergent paths for continued success.
Who knows how this will shake down, I've been a fan of lxc/docker from early on, and I use windows on my desktop, but this just scares me.
You're right. Given that it's basically a tarball with some metadata, there's going to be a huge tendency to bike-shed it... and I hope we can avoid that.
What does this mean for vendors like VMWare that want VMs to be the unit of deployment that developers interface with?
Seems to me that VMWare's VM management technology is still needed, but the clock is now running on how long it will be before their part of the stack is irrelevant, as all the smarts move into the container-management layer.
I'm at DockerCon and VMWare has a booth. They demoed some cool stuff. Like using docker as the interface to some of their stuff. It wasn't pure containers, but they had non-linux "containers" running (DOS was their demo). Powered by VMWare but the interface was Docker. It even supported commits, pulls, and pushes. That will be a good "polyfill" for platforms that aren't supported by Docker (yet). It seems to me that VMWare is looking for ways to stay relevant long term.
Well, VMs are what makes it possible for a service like AWS to spin up an instance for you. They'll be needed, but you're right, developers won't look to them as solutions for their deployment problem.
Everything gets shoehorned into the one tag. Originating repo, image name, version. So next time you pull "x/y:latest", the current holder of that tag loses everything. You see an image in the list with nothing listed - no repo and no image name, when it should just lose the 'latest' tag. If I have multiple images on a machine, I now can't tell which are the old images from a particular repo (well... I can guess by image size), and that's not great if I want to roll back. There is no reason for an image to automatically lose a tag describing where it came from.
It also means you have to tag twice using the full tag if you want build numbers: this build is "x/y:latest" and "x/y:v1.2.3", when you could just do "latest" and "v1.2.3". Similarly, when you pull an image, it should pull all tags associated with that layer, so you pull 'latest' and it also brings the tag 'v1.2.3'. This seemed to be the case with docker v1.5, but it seems inconsistent in 1.6. I haven't had time to nail down that suspicion, though.
There are other bits that could do with polish (like being able to do multiple tags at once, rather than push afresh for each one), but the main problem is the single tag field. Given the amount of metadata they already store for an image, this seems a strange behaviour to suffer.
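For anyone following along, the double-tag dance looks like this (repo and version taken from the example above):

```
# Tag and push twice: once for the moving "latest" pointer, once for the build number.
docker build -t x/y:latest .
docker tag x/y:latest x/y:v1.2.3
docker push x/y:latest
docker push x/y:v1.2.3
```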
Weird to see Microsoft in that list. On a related note, will this new container standard support non-Linux kernels? Would be nice to be able to run containers directly on OS X without having to go through the boot2docker VM.
Microsoft probably wants people running containers on HyperV/Windows Server machines, the way they currently contribute patches to Linux so people can run Linux VMs on HyperV/Windows Server.
I'd say it's unusual for MS to help open standards given their usual attitude and history, but if you think of it, they have no edge in containers competition. I.e. it's more likely for them to participate when they have no hope for lock-in. Good luck with expecting their participation in other situations (for instance contributing to Vulkan when their DirectX is very dominant already).
Yes, but Docker is an application distribution format that essentially bundles parts of the server the app needs to run on with it so they can run within a container on the host machine. Didn't Microsoft solve this application bundling with .NET before?
Microsoft is going to have containers for Windows Server. It looks like it would work with Docker's client as well.
"Last October, Microsoft and Docker, Inc. jointly announced plans to bring containers to developers across the Docker and Windows ecosystems via Windows Server Containers, available in the next version of Windows Server."
Containers in themselves offer little additional value over virtual machines. Don't switch over just because of the hype. Evaluate it yourself! (I personally LOVE Vagrant!)
It points out, in the typical form of humor, that we're going to take N wheels (the container spec, in this case), try to shove them together, and end up with N+1 wheels. And this is relevant in a space with at least 4 competing standards (and one dominant one).
In the short term, what Docker uses will matter more than the output of any such committee. In the long term, well, it depends on Docker-the-company's fate.
Many container services currently support starting Docker images, and with all of the money spent on marketing, Docker also owns the container ecosystem mindshare. It's their game to lose.
Aside from Docker containers (which are broadly consumed by competing container engines), rkt containers, the systemd "Container Interface" specification, and the LXC containers, you're right. There's no standard.
A per-container configuration file, stored in the container directory. Similar to the "tarball with metadata" of Docker, but you'll have to roll your own layers (or do it the old fashioned way & hardlink common files between containers).
You're right on the container interface front, my apologies. However, systemd does specify an image format (basically a bootable ISO or some such). It's not as self-contained as a docker image (you still need to specify a bunch of parameters), but it's available.
systemd itself has its own system container tool called nspawn, though again like LXC it doesn't have any image format, serialization or anything of the sort. rkt originally used nspawn as its backend (still has it in, I think).
That's just wrong. Please have a look at the LXC or systemd-nspawn websites [1] or our guide to container technology [2].
LXC is not 'low level capabilities', as Docker erroneously refers to it on its website, or a 'low level API'. [3]
The LXC project, in development since 2009 and the basis for Docker, and now systemd-nspawn, give you pretty advanced OS containers with mature management tools, full-stack Linux networking, multiple storage options (including support for btrfs, ZFS, LVM, and overlayfs), cloning, snapshotting, and a wide choice of container OS templates.
It also offers unprivileged containers that let non-root users run containers, which is a big step forward for container security, and the project is currently working on live migration of containers.
Docker takes that as a base, limits the container OS template to a single app, builds the containers with layers of aufs and enforces storage separation. [4] It's not rocket science, you can do this yourself with overlayfs and LXC in minutes. [5]
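For the curious, the overlayfs part of that is roughly the following (the paths are made up, and this assumes a recent kernel with overlay filesystem support):

```
# Stack a writable per-app layer on top of a shared read-only base rootfs.
mkdir -p /containers/app/{upper,work,merged}
mount -t overlay overlay \
  -o lowerdir=/containers/base-rootfs,upperdir=/containers/app/upper,workdir=/containers/app/work \
  /containers/app/merged
# Then point the LXC container's rootfs at /containers/app/merged.
```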
Docker is an opinionated way to use containers. You don't need to adopt Docker to get the benefits of containers. You adopt Docker to get the benefits of Docker. A lot of the messaging, hype and marketing conflates the 2 and it suits the docker ecosystem to do that but it does not benefit informed discussion.
Nspawn is a systemd container project, which is similar to LXC in that it gives you OS containers, not layered single app containers. Rkt is based on Nspawn if I am not mistaken.
It's curious that the LXC project, largely responsible for the development of most of the container technology available today, is not part of this. I feel most folks who can move beyond the wild misconceptions floating around and try LXC will find it significantly simpler to use.
We provide a lightweight VM image with a complete LXC environment based on Alpine Linux for those who are interested. It's 80 MB and available for Virtualbox, VMWare and KVM at https://www.flockport.com/flockbox
Disclosure: I run flockport.com, which provides an app store for servers based on LXC.
It seems you're referring to LXD+LXC as just LXC. That's fine, but I was referring specifically to LXC itself, which is considered low-level (even by its creators).
As for nspawn, it's certainly a good tool, but as far as I know it doesn't define a container format, which is what we were discussing.
No, I was referring to LXC. Did you have a look at the links to the LXC website and the container overview? There are plenty of 2-minute screencasts there that show LXC in action.
The LXC project does not refer to itself as low level, but ironically Docker, which knows exactly what LXC is, does. And the result is these needless misconceptions, which is a tad unfair to the developers of the LXC project. Why would anyone try LXC if they think it's 'low level'?
LXD extends LXC to add a REST api, multi-host container management and soon live migration.
LXC is a full-fledged userland project and has been since 2009, when it began development. It is also responsible for driving the development of a lot of the kernel capabilities, mainly cgroups and namespaces, needed to support containers.
Those are the 'low level' capabilities that LXC, Docker, and nspawn use to give you containers, but it would be as inaccurate to refer to Docker as 'low level' because it uses them as it would be to refer to LXC that way.
A container is simply a file system in a folder that gets booted, and it works on any underlying Linux filesystem, so I am not sure how a 'container format' fits there conceptually. That's one of the great things about containers: you do not need to think about storage. Simply zip the container folder and move it across servers. Containers are completely portable across any Linux system today. But if you are going to use aufs or overlayfs layers to build single-app containers with constrained container OS templates, then perhaps there is a need for a format.
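In practice that move is just something like this (assuming the default /var/lib/lxc layout and a stopped container whose name here, web01, is made up):

```
# Archive the container directory, copy it over, unpack it, and start it.
tar -czf web01.tar.gz -C /var/lib/lxc web01
scp web01.tar.gz otherhost:/var/lib/lxc/
ssh otherhost 'cd /var/lib/lxc && tar -xzf web01.tar.gz && lxc-start -n web01 -d'
```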
When I said they referred to it as low-level, I wasn't just making shit up. From Stéphane Graber's (LXC lead) blog:
"Instead, LXD is our opportunity to start fresh. We’re keeping LXC as the great low level container manager that it is. And build LXD on top of it, using LXC’s API to do all the low level work. That achieves the best of both worlds, we keep our low level container manager with its API and bindings but skip using its tools and templates, instead replacing those by the new experience that LXD provides."
> A container is simply a file system in a folder that gets booted and works on any underlying Linux filesystem so I am not sure about how conceptually a 'container format' fits there?
Well, then maybe you should read TFA? It's all about the creation of a standard 'container format'.
Hi, I think we are playing with words and context here so I will leave it here.
The quote is in a context and says LXD uses LXCs low level api. It does not say 'LXC is a low level api'.
How does one conclude that from the quote and take it out of context without misleading readers? Any one with a remote awareness of LXC will know that's erroneous. Do you still think your comment was accurate?
Have you used LXC, are you familiar with LXC beyond the quote, are you interested in having an informed discussion on LXC and containers? There has to be a basis for informed discussion.
I don't know how else I can read "We’re keeping LXC as the great low level container manager that it is" as anything other than LXC being a low-level tool.
No, from what I can tell, it's actually to launch systemd itself inside a container. For example, from the document:
To allow systemd (and other code) to identify that it is executed within a container, please set the $container= environment variable for PID 1 in the container to a short lowercase string identifying your implementation. With this in place the ConditionVirtualization= setting in unit files will work properly. Example: "container=lxc-libvirt"