
Simply put, Docker includes a bunch of UX components that Kubernetes doesn't need. Kubernetes currently relies on a shim to interact with the parts that it _does_ need, and this change simplifies that abstraction. You can still use Docker to build images deployed via Kubernetes.

Here's an explanation I found helpful:

https://twitter.com/Dixie3Flatline/status/133418891372485017...



Former Docker employee here. We've been busy writing a way to allow you to build OCI images with your Kubernetes cluster using kubectl. This lets you get rid of `docker build` and replace it with `kubectl build`.
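In practice it's close to a drop-in swap. A rough sketch (the registry and image names are placeholders, and exact flag spellings may vary by plugin version, so check the project README):

```shell
# Before: build locally against the Docker daemon
docker build -t registry.example.com/myapp:v1 .

# After: build inside the cluster via the BuildKit CLI for kubectl
# (syntax mirrors `docker build`)
kubectl build -t registry.example.com/myapp:v1 .

# Push the result to a registry so cluster nodes can pull it
# (--push is the usual BuildKit convention; verify against your version)
kubectl build -t registry.example.com/myapp:v1 --push .
```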

You can check out the project here: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl


That is a really good idea! Does this just schedule a one-off pod, then have that do the build?


Not quite a one-off pod, but very close to that. It will automatically create a builder pod for you if you don't already have one, or you can specify one with whichever runtime that you want (containerd or docker). It uses buildkit to do the builds and has a syntax which is compatible with `docker build`.
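For reference, explicitly creating a named builder with a chosen runtime looks roughly like this (builder name is a placeholder, and the exact subcommands/flags should be verified with `kubectl buildkit --help` for your installed version):

```shell
# Create a builder pod backed by the containerd runtime
# (use "docker" instead if your nodes run the Docker runtime)
kubectl buildkit create --runtime containerd mybuilder

# Run a build against that specific builder
kubectl build --builder mybuilder -t registry.example.com/myapp:v1 .
```

If you skip the create step, a default builder pod is created for you on first build, as described above.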

There are also some pretty cool features. It supports building multi-arch images, so you can do things like create x86_64 and ARM images. It can also do build layer caching to a local registry for all of your builders, so it's possible to scale up your pod and then share each of the layers for really efficient builds.
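The multi-arch case follows the usual BuildKit convention of a comma-separated platform list; a sketch (registry and tag are placeholders):

```shell
# Build x86_64 and ARM variants in one invocation and push a
# multi-arch manifest (BuildKit-style --platform syntax)
kubectl build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:v1 \
  --push .
```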


That’s pretty nice. One of the things I am curious about is how Kubernetes will deal with private “insecure” in-cluster registries (which are a major pain to set up TLS for when you’re doing edge deployments or stuff that is inherently offline).
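For what it's worth, on the node side containerd can be pointed at a plain-HTTP in-cluster registry. A sketch of the relevant `/etc/containerd/config.toml` fragment (the registry address is a placeholder, and this is the older CRI config schema; newer containerd releases moved registry config into per-host `hosts.toml` files, so check your containerd version's docs):

```toml
# /etc/containerd/config.toml (fragment)
# Allow pulls from an in-cluster registry without TLS.
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.local:5000"]
      endpoint = ["http://registry.local:5000"]
```

Kubernetes itself doesn't mediate this; it's between the node's container runtime and the registry, which is exactly why it's painful for edge/offline setups.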



