Docker images contain a filesystem for an operating system, minus the OS kernel. This project uses Docker to build a tiny OS, extracts all the files from the Docker image, adds a small OS kernel, and repackages the result as a VM image.
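A minimal sketch of that pipeline (the image name and rootfs directory are made up; the final VM-image step depends on the project's tooling):

    # Build the tiny OS as a Docker image, then dump its filesystem
    docker build -t tiny-os .
    cid=$(docker create tiny-os)
    mkdir -p rootfs && docker export "$cid" | tar -xC rootfs
    docker rm "$cid"
    # From here you'd add a kernel and bootloader to rootfs/ and
    # package it as a bootable disk image; that part varies by project.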
Docker is not intended to replace VMs. It provides lightweight isolation for processes within a host (while sharing one kernel between them), plus some handy tooling for building images that works well with version control. (Building a CentOS-based image takes under a dozen LOC with Docker, versus a few hundred LOC with Packer + Kickstart.)
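For illustration, a hedged sketch of what "under a dozen LOC" looks like (the package choice is arbitrary):

    # Hypothetical minimal CentOS-based image: a few Dockerfile lines
    cat > Dockerfile <<'EOF'
    FROM centos:7
    RUN yum -y install httpd && yum clean all
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
    EOF
    docker build -t centos-httpd .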
Kata Containers (https://katacontainers.io) is more along the lines of a "VM replacement", although ironically it achieves that by running each container inside a lightweight VM.
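If it's registered as a Docker runtime, using it is roughly this (assuming kata-runtime is installed and configured; names vary by setup):

    # Run an ordinary container image, but isolated in a lightweight VM
    docker run --rm --runtime kata-runtime alpine uname -r
    # Prints the guest kernel version, not the host's -- each container
    # gets its own kernel.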
You can tighten containers, but at the end of the day they run as native processes on a shared kernel; one kernel vulnerability and the game is over. VMs offer an easier (if heavier) mental model of security, both between guests and between guest and host.
A jailbreak, whether from a process namespace or a VM, is always a security risk, whatever it's breaking out of. Both are susceptible. VMs are maybe a bit more mature and push some isolation down to hardware, but given the recent Intel CPU flaws, I wouldn't rely on that too much...
"Containers are less secure" is just FUD. That VM's or containers alike are running on the same CPU is currently a much more real threat.
The Docker daemon itself adds attack surface, sure, but at the OS/kernel level they're doing exactly the same thing, and Docker is probably the most scrutinized implementation out there...
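You can see the same kernel primitives without Docker at all; a quick sketch with util-linux (assuming unshare is available):

    # The same kernel namespaces Docker uses, driven by hand:
    sudo unshare --pid --fork --mount-proc /bin/sh
    # Inside, `ps aux` shows only this process tree -- the isolation
    # comes from the kernel, not from the docker daemon.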
I assume it's mostly to make it easier for people to test this out with existing stuff. Docker is the standard way of taking a base image and layering stuff on top of it.
Working with VM images was a PITA; I don't know if anything has changed. Having the recipe (Dockerfile) and the required files in a VCS is useful as a reference even when setting up bare-metal machines. So Docker (and Puppet/Ansible) might have a bigger impact on how work is organised than on anything else.
You still have the Dockerfile and the required files in a VCS, but now what you have running is a full VM instead of a somewhat isolated process sharing a kernel with other containers.
So far so good for individual/standalone containers. But if you need tightly integrated containers (sharing networks, volumes, ports and so on), things may be a bit more complicated; see the sketch below. And I'm not sure about Kubernetes. YMMV.
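A rough sketch of the manual wiring involved (image and resource names are placeholders):

    # Two containers sharing a user-defined network and a named volume
    docker network create appnet
    docker volume create appdata
    docker run -d --name db  --network appnet -v appdata:/data some-db-image
    docker run -d --name web --network appnet -p 8080:80 some-web-image
    # `web` can now reach `db` by name; keeping all this in sync
    # across hosts is where it gets complicated.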
It's not such a PITA anymore. Most distros have automated build systems to create VM images from a config file. For example, we use FAI + Ansible to fully automate the creation of Debian AMIs for AWS, plus deployment and provisioning.
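Roughly like this (hostname, size, classes, and bucket are placeholders; the exact flags depend on your FAI config space):

    # Build a raw Debian disk image with FAI, then import it for an AMI
    fai-diskimage -u demohost -S 8G -cDEBIAN,CLOUD debian.raw
    aws s3 cp debian.raw s3://my-bucket/debian.raw
    aws ec2 import-snapshot --disk-container \
        "Format=raw,UserBucket={S3Bucket=my-bucket,S3Key=debian.raw}"
    # ...then register the snapshot as an AMI and let Ansible take over.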