Hypernetes: Bringing Security and Multi-Tenancy to Kubernetes (kubernetes.io)
109 points by scprodigy on May 24, 2016 | 29 comments



They mention that this helps cure some issues with regard to resource sharing, memory usage, etc. But does each VM still have a static allocation of memory?

One of the main benefits I have now is that if I run a number of containers that all take varying amounts of memory, I can just throw them on and they share memory amongst each other quite efficiently. If I have to make a static memory allocation for a VM, I'll typically choose a conservative number and usually under-utilize the machine, wasting a lot of memory per instance. Not so bad since it's chosen per pod, but still an issue.
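
For concreteness, this is the kind of per-pod declaration I mean - a minimal sketch using the standard pod-spec resource fields (name, image, and numbers are placeholders); with a VM-per-pod runtime, the request effectively becomes the VM's fixed memory size:

    # pod.yaml - then: kubectl create -f pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: memory-demo            # placeholder name
    spec:
      containers:
      - name: app
        image: nginx               # placeholder image
        resources:
          requests:
            memory: "256Mi"        # what the scheduler reserves for the pod
          limits:
            memory: "512Mi"        # hard cap enforced by the runtime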

As it happens, this same issue is why I'm leaning towards lightweight native applications these days instead of an aggressive greedy virtual machine that grabs a bunch of heap. Golang/Rust in particular.


Actually, scale-up is pretty easy for both VMs and Linux containers, but scale-down is very troublesome for both.

And the scheduler will need the memory size (not right now, but it's inevitable).


That is what rkt with kvm is essentially doing as well, correct?

https://coreos.com/rkt/docs/latest/running-lkvm-stage1.html
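
For reference, what that doc describes is selecting a KVM-based stage1 when launching the pod - roughly along these lines (the exact flag has shifted across rkt releases, so treat this as a sketch):

    # run the app container inside a lightweight KVM VM instead of a
    # plain cgroup/namespace sandbox
    sudo rkt run --insecure-options=image \
      --stage1-name=coreos.com/rkt/stage1-kvm docker://nginx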


Not quite - rkt will bring up a new VM for each container, whereas this approach only brings up a VM per pod (i.e., a set of functionally related containers).


This isn't correct. rkt does a VM per pod.


Yes, this is a similar concept. A pod (1 or more containers) is wrapped in a VM. I believe Hyper uses Xen and rkt uses lkvm.


No, it is HyperContainer.

https://github.com/hyperhq/hyperd


This really does not appeal to me at all. The major point of Docker containers is not the image format; it's that the kernel can allocate resources more intelligently. VM images work just fine for "shippable images."

What I'd rather see is an allocation layer for physical resources that just cordons off the whole machine (physical or virtual) by tenant as soon as previous tenant resources have been fully consumed, then reclaims hosts after usage subsides. So as a provider I still only have one cluster to manage, but as a consumer I still don't worry about another layer of abstraction slowing things down or pre-allocating resources.


I'm interested in the economics of (docker) containers vs. virtual machines. Containers can run within a VM, but a VM can only run within a hypervisor.

Currently, if you want to resell computing resources, you need to rent or buy a dedicated server, and run a hypervisor on it.

Containers enable a new class of reselling computing resources. Because you can run a container within a VM, you can resell computing capacity on a VM.

I think we are going to see another abstraction on top of "the cloud," due to this additional layer of reselling (new Russian doll on the inside, new doll on the outside).

The physical abstraction is:

Datacenter > Floor Space > Server Rack > Server

The virtual abstraction is:

Server > VM > Container > Kubernetes|MESOS|...

Virtual is a 1:1 inverse of physical. Next step is datacenter interconnections (i.e. multihost kubernetes or whatever flavor of the month IaaS software people use).


People have been reselling containers without needing an intermediary VM abstraction forever; look at any cheap VPS host offering OpenVZ-based "virtual machines"—which are actually [resource-quota'ed] containers.
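
Same idea with today's tooling - carving one box (or one VM) up among tenants with per-container quotas; a rough sketch, the limits are arbitrary:

    # each tenant gets a memory cap and a CPU-shares weight
    docker run -d --name tenant-a --memory=512m --cpu-shares=256 nginx
    docker run -d --name tenant-b --memory=1g   --cpu-shares=512 nginx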


It's not a question of need; it's a question of ease of opportunity. It's now easier to virtualize via container, and there are more opportunities, since it's easier to get a VM than a dedicated server with hypervisor access.


So we end up with:

HW -> EC2 scheduler -> VM -> You -> Container scheduler -> Container -> App

There should not be two schedulers if containers are secure for multi-tenancy.


Can someone clarify how this compares to LXD, released by Canonical in Ubuntu 16.04? A lot of the keywords and concepts seem similar.


The idea is to integrate LXD into OpenStack as a "lightweight VM". So, LXD runs a system with a full init and executes it inside of a Linux container. Another term for this is a "system container".

Kubernetes is designed for running application container images (e.g. docker, oci, appc), and in this case those containers are running not as Linux containers on a shared kernel but inside of a VM with Xen/Hypernetes. Another implementation of this "app container in a VM" concept is Clear Containers, which is available in rkt[1] and can, like Hyper, run the application container image formats (docker/appc).
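
To make the contrast concrete, the LXD side looks roughly like this (image alias and container name are placeholders):

    # LXD: a full-distro "system container" sharing the host kernel,
    # with a real init running as PID 1 inside
    lxc launch ubuntu:16.04 sys-container
    lxc exec sys-container -- ps -p 1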

This is all a bit confusing so let me know if this helps :)

[1] https://coreos.com/blog/rkt-0.8-with-new-vm-support/


They're pretty much the exact opposite of each other. LXD runs things that look like VMs while actually running on the same kernel; this approach creates full (albeit lightweight) VMs to run container images.

The broad approach isn't new (https://clearlinux.org/features/clear-containers describes Intel's Clear Containers work, which allows you to use rkt to run containers in individual lightweight VMs), but the trick in this case is to perform the isolation at the pod level (i.e., a collection of functionally related containers) rather than at the individual container level.

(Edit: Apparently the Docker support for Clear Containers hasn't gone upstream, so dropped the reference to it)


Another different side of HyperContainer is that it follows the OCI spec; check out the runv project here: https://github.com/hyperhq/runv/. So technically speaking, it's a hypervisor version of OCI, just like Docker is a Linux container version of OCI. It seems rkt/Clear Linux and LXD do not.


The OCI Runtime spec is still a work in progress. Docker and HyperV are implementing parts of the current pre-release versions of the OCI Runtime spec.

You can use OCI Runtime bundles with rkt if you use the oci2aci[1] tool, and rkt will be a full OCI runtime once the OCI Runtime and Image specs mature. We could use help getting these OCI projects[2][3] to v1.0 if you can spare cycles!

[1] https://github.com/huawei-openlab/oci2aci

[2] https://github.com/opencontainers/runtime-spec

[3] https://github.com/opencontainers/image-spec


Actually it's runc that is the Linux container version of OCI. Docker is a higher-level abstraction which calls runc by default but can call any OCI-compliant runtime, including runv.

See https://blog.docker.com/2016/04/docker-engine-1-11-runc/
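
For what it's worth, wiring in an alternative runtime looks roughly like this in Docker releases after 1.11 (the runv path is a placeholder, and I haven't tried runv specifically; dockerd needs a restart after the config change):

    # /etc/docker/daemon.json - register an extra OCI runtime
    { "runtimes": { "runv": { "path": "/usr/local/bin/runv" } } }

    # then select it per container
    docker run --runtime=runv -d nginx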


It would be great if I could use dockerd to start runv containers!


LXD = VM-like Linux container
HyperContainer = (Docker) container-like ultra-light VM

Make sense?


Awesome guys, that helps a lot. Thanks for the info and links.


Speaking of security on Kubernetes, it's worth noting that most of the "Getting Started" guides (e.g. [0]) to help you set up a cluster result in completely unauthenticated API servers.

This means that by default, anyone can do anything they want with your cluster.

There are no warnings, no suggestion that turning on the much better TLS-based authentication would be a good idea (or even how to do it), nothing at all.
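
For anyone digging in, the relevant kube-apiserver flags are roughly these - a sketch only, with placeholder paths and all the other required flags (etcd, service CIDR, etc.) omitted; check the docs for your version:

    # require client certs signed by your CA, and keep the legacy
    # unauthenticated port bound to localhost only
    kube-apiserver \
      --client-ca-file=/srv/kubernetes/ca.crt \
      --tls-cert-file=/srv/kubernetes/apiserver.crt \
      --tls-private-key-file=/srv/kubernetes/apiserver.key \
      --insecure-bind-address=127.0.0.1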

Be very careful with Kubernetes.

[0]: http://kubernetes.io/docs/getting-started-guides/ubuntu/


Maintaining community-led documentation is a hard and time-consuming process, especially with a young project. I encourage you to get involved if you have a few free cycles.

Sometimes you have to take a stance on these types of things, as we have done with the CoreOS + Kubernetes community guides [0]. The guides are open source, but full TLS, passing conformance tests, etc. are required for contributions.

(I work at CoreOS)

[0]: https://coreos.com/kubernetes/docs/latest/#installation


The CoreOS documentation is a lifesaver (even when setting up k8s on a non-CoreOS system). Thanks a lot (and I agree with the comments on documentation/involvement).


At a brief glance, this looks comparable to Magnum:

- containers
- openstack
- multitenancy


Totally not!


Aha, the chancellor (https://www.youtube.com/watch?v=PivpCKEiQOQ) will be thrilled to see that finally he can eliminate IaaS/VMs and use Docker in a production environment.


Does anyone know whether there's a similar thing for Mesos?






