> What problem is k8s even trying to solve?

> Say you want to deploy two Python servers. One of them needs Python 3.4 and the other needs Python 3.5.

Honestly hilarious. The core value-prop example is wanting to run two programs that differ by a minor language version on one machine. In order to do that, you get to deploy and configure hundreds to thousands of lines of yaml, learn at least 20 different pieces of abstraction jargon, and keep spending all your time supporting this mess of infrastructure forever.

How many engineering teams adopt kubernetes because it's what everyone's doing, versus out of genuine well-specified need? I have no idea.

I use k8s at work, I know it has benefits, but it too often feels like using a bazooka to butter your toast. We don't deploy to millions of users around the globe, and we're all on one Python version (for library compatibility, amongst other things). Docker is an annoying curse more than it's a boon. How much of this complexity is because Python virtualenvs are confusing? How much would be solved if instead of "containers" we deployed static binaries (that were a couple hundred MB larger apiece because they contained all their dependencies statically linked in... but who's counting)? Idk. Autoscheduling can be nice, but it can also be a footgun. To boot, k8s does not have sane defaults; everything's a potential footgun.

In 10 years we're going to have either paved over all of the complexity, and k8s will be as invisible to us then as linux (mostly) is today. Or, we'll have realized this is insanity, and have ditched k8s for some other thing that solves a similar niche.

edit: I realize this is a salty post. I don't mean to make anyone feel bad if they love to think about and use k8s. I appreciate and benefit from tutorial articles like this just as much as the next dev. It's just the nature of the beast at this point, I think.




It is comforting to know others feel this way, especially the opening paragraph. Adopting complexity to solve complexity blows my mind.

What’s ironic, too, is that the complexity is so far abstracted from the problem it’s solving (and from the problems it then introduces) that troubleshooting becomes a nightmare.

In my experience, too, few of the people pushing adoption of these technologies are system administrators, or even architects/designers - or they are, but fresh out of uni (which I do not mean to suggest is bad).

Accountability becomes really muddy, and that's before even considering the security boundaries and other business-level disciplines these “solutions” impact.

I get it. There’s a place for these technologies. But almost every adoption I’ve experienced to date has been more a case of shiny-thing syndrome.

So; thanks for this comment.


> I use k8s at work, I know it has benefits, but it too often feels like using a bazooka to butter your toast.

This is such a great description (I also work exclusively on a platform based on Kubernetes for my job). I ran K8s at home successfully for a while using K3s (such a great project/tool) to become more proficient. After a few months I found that there were so many features I didn't need for a homelab that the complexity wasn't worth it.

I feel like, at the moment, Kubernetes is awesome when you have two ingredients:

1. You go with a managed K8s offering (such as AWS EKS)

2. You have a team of engineers dedicated to the health of the K8s platform

It's pretty cool that a somewhat small platform team can wield the scalability of a company like Google, BUT they need to be very good with K8s. And many workloads probably don't need that much scalability.


For me, it’s more about simple redundancy. Prior to k8s, to reboot a node, you’d have to

1. Manually remove it from the load balancer(s)

2. Wait for connections to drain

3. Stop the services running so they can gracefully stop

4. Reboot the node

5. Do everything in reverse

With k8s, all that can 100% be automated, basically for “free”.
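For the curious, the piece that makes the automated version safe is a PodDisruptionBudget: `kubectl drain <node>` cordons the node and evicts its pods, but evictions are only allowed while the budget is satisfied. A minimal sketch (the `app: web` label and the replica count are made-up values):

```yaml
# Hypothetical PodDisruptionBudget: node drains may evict pods of the
# `app: web` workload only while at least 2 replicas remain available.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
```

After the reboot, `kubectl uncordon <node>` puts the node back into rotation, and the Service/endpoints machinery takes care of the load-balancer steps from the list above.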


It can be automated just as it was automatable before; you now have a REST API to do the automation, and that's the big change with respect to being on bare metal (though you've had Puppet/Chef/Ansible for almost 15 years now). If you were in the cloud, you already had a REST API to do that before.


It's very different though: you don't tell it the steps to take, you tell it the desired world state, and Kubernetes figures out how to transition between states; if something breaks, it tries to fix it by going back to the state it's supposed to be in. I think this immutability/idempotency aspect of Kubernetes is a bit underappreciated; it can't really be compared with Ansible.
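To make "you tell it the world state" concrete, here is a minimal sketch of a Deployment (name, image, and port are hypothetical): it only declares "keep 3 replicas of this image running", and the controller keeps reconciling the cluster toward that, rescheduling pods if a node or container dies.

```yaml
# Hypothetical Deployment: declares desired state (3 replicas of one image);
# the controller continuously reconciles the cluster toward this state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.2.3
          ports:
            - containerPort: 8080
```

There is no "step 1, step 2" anywhere in it; that's the contrast with a playbook.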


Further, the workload /just moves/ to another node. In the old way, the service was tied to the node it was on: if a disk failed, the service failed. Now, within moments, that service moves and the node can just go away. It’s almost magic. I think the reason k8s doesn’t make much sense in a home-lab setting is the limited number of nodes.


That's the same magic you get with autoscaling groups: if an instance, or the app running on that instance, dies for any reason, it gets recycled automatically.


That doesn’t work for stateful apps though, which is what I was thinking more of.


IMO and IME, stateful apps on k8s are still a PITA, with a lot of corner cases that can have really bad consequences. But as time passes, it's getting better.


Puppet was/is declarative, or at least tried to be (unlike Ansible). Don't get me wrong, I completely understand the differences with K8S, and I'd rather accept "the inner loop" as the biggest differentiator.


Ansible could do this pretty easily, no?


Yes, with Ansible you would be able to set this up. But with Kubernetes it is basically free. Kubernetes will manage your workloads and if you set up your health checks correctly it will do this for you. Kubernetes is designed for such tasks.
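To make "set up your health checks correctly" concrete: that means giving each container readiness/liveness probes, so Kubernetes knows when to route traffic to a pod and when to restart it. A sketch of the relevant fragment of a pod template (the /healthz path, port, and image are made-up values):

```yaml
# Fragment of a pod template's container list (hypothetical names/paths):
# readiness gates traffic to the pod, liveness restarts a wedged container.
containers:
  - name: web
    image: registry.example.com/web:1.2.3
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
```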

However, I would comfortably recommend Ansible for a home lab, but not Kubernetes. For something like that, it is too complex, with too many abstraction layers and too many caveats.


the real "killer app" of "workload orchestration" is the active control loops, configured (somewhat naturally) declaratively.

Ansible and other configuration managers and provisioning tools - as far as I know - had no way to do this. (they had policy conformance checks and whatnot, but fundamentally they did not make it easy to get to the desired good state, they ran on-demand and had to rely on ad-hoc information gathering, which made them slooooow and brittle)


The big difference in my opinion is that your ad-hoc servers using static binaries, with all the outside stuff specific to your organization, are non-standard.

For some miraculous reason K8S is ubiquitous and everybody uses it. Standardization is a boon.

People complain about git using similar arguments, and yet having a vast majority of the tech world on it is a boon for tooling.

Both those technologies are excellent but take some time to master (you don't have to be an expert though).

I can get on a project using git, kubernetes and say RoR and understand it very quickly without help. It is well bounded. It takes a git account and a kubeconfig. All set.

A custom python codebase deployed on custom servers running in big enterprise-y network, not so much.


> How much would be solved if instead of "containers" we deployed static binaries (that were a couple hundred MB larger apiece because they contained all their dependencies statically linked in... but who's counting)?

Isn't that what a container is though? (static linking here meaning everything inside the container and no need to provide & manage external dependencies).

The container format has the advantage of simply being a standard Linux environment, the kind of thing you work with anyway, and it can host any type of process or application.

As for k8s, of those 100s or 1000s of lines of yaml, there's probably only 20 or so that you're interested in for a basic deployment. Tools like kustomize or pulumi make templating simple.
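For what it's worth, the kustomize side of that is also small: a kustomization.yaml points at the base manifests and overrides only the fields you care about per environment. A minimal sketch (the file names and image name are hypothetical):

```yaml
# Hypothetical kustomization.yaml: reuse base manifests, override just the image tag.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: registry.example.com/web
    newTag: "1.2.4"
```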

K8s is pretty simple under the hood. As an exercise, I once set up a cluster by hand on a couple of VMs using kubeadm. It all fell into place once I understood what the kubelet was doing, and K8s seemed a lot saner to me thereafter.


No, the abstraction of a container involves not only namespace isolation, but also access to a certain quantity of resources to process and exchange information within and across namespaces.
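The "quantity of resources" half of that abstraction is what requests/limits express; a sketch of the relevant fragment of a container spec, in Kubernetes terms (the numbers are arbitrary examples):

```yaml
# Fragment of a container spec: the resource-accounting side of the container
# abstraction, alongside namespace isolation. Values are arbitrary examples.
resources:
  requests:
    cpu: "250m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"
```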

So, pointing out "extra tools" (kustomize) to ease the difficulties of yet another abstraction (k8s) is not really the answer. You are just convincing yourself that your tool of choice can really solve everything.

Ideally, the standard built-in tools should be easier to use, easier to understand, and less buggy by virtue of better opinionated choices, just like some programming languages work for people who have never coded before.


A container only knows about what's going on inside it, though. If you want to hook those resources up with peers in a decoupled way, that requires an abstraction on top, preferably declarative, or else you're faced with a substantial amount of manual (and thus costly and error-prone) management.

We could argue the merits of yaml and I'll happily concede it's not ideal, but the complexity isn't going away and the tools to manage it are pretty standard.


In 10 years Google will be a fading memory (one hopes). Once they die, K8s dies.

K8s' fatal flaw was being too big for its britches. As individual pieces, none of the K8s services are useful. Nobody downloads and installs just one of the components for use in some other project. And for that reason, it will always be just one big fat ugly monolith of fake microservices.

The architecture that will come afterward will be the model we've had since the 90s: single components that do their job well, that can be combined with other components in a cobbled-together way, and that together make up a robust system. The same thing that lets you build an architecture out of PostgreSQL, Ory, Nginx, Flask, React, Prometheus, ELK, etc.: they're all separate, but they can all be combined the way you want, when you want. And they're all optional.


> single components that do their job well, that can be combined with other components in a cobbled-together way

This is sort of Nomad & friends?


I can’t help but think this is already here: docker, docker-compose, and proprietary cloud products. You’re not important enough to need a load balancer, and if you are, you can use PaaS ELBs and native services. Only once you’re at Google scale does k8s make sense.
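A sketch of what that "already here" stack looks like, as a minimal docker-compose.yml (service names, images, and ports are made up):

```yaml
# Hypothetical docker-compose.yml: the simpler deployment story for a small app.
services:
  web:
    image: registry.example.com/web:1.2.3
    ports:
      - "8080:8080"
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder secret for illustration only
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Point it at a box with Docker installed, run `docker compose up -d`, and you're most of the way to what a small team actually needs.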


IMO this is the wrong way to approach explaining this. Having Python 3.4 and 3.5 coexist is a laughable example - yes. The whole post is written from the perspective of a developer who has code he needs to deploy.

If one looks at it from the operations/sysadmin perspective:

- I have a big server, or 10 servers

- I don't want to install PHP/Python/.NET for each dev team on the server, or get a new server and maintain it one way for the PHP teams and a separate way for the .NET teams

- I want to provide an environment where devs can just drop in a container, so they have what they need without me signing off on each install and each VM

I also don't think dev teams should run their own k8s clusters. If you have a huge company with people running data centers, you don't want them to have to deal with whatever flavor of back-end code those servers are running.


> How much would be solved if instead of "containers" we deployed static binaries

That's pretty much what happens in the Go space, and people are still deploying k8s like crazy there, seemingly without considering the trade-offs involved. It's people's full-time job to manage all of this, and it really isn't needed if your ops story is just a bit simpler.

I'm a developer, so for the most part I don't really care what the ops people are doing; whatever works well for them, right? But because of the way k8s works and because of the "dev-ops" stuff, it's also pushed hard on developers. I've worked at companies where it was literally impossible to set up a local dev instance of our app, and even running tests could be a challenge, because unless you're very careful k8s gets into everything.


Kubernetes feels like the lower-level engine that some great abstraction should sit on top of: something with great defaults, fault tolerance, self-repair, and a developer-friendly interface. Unfortunately, that higher-level abstraction doesn't exist. Kubernetes makes it all possible, and it's ingenious and revolutionary; it's just a terrible, terrible experience to actually use in practice.

We need something like what PyTorch was to TensorFlow. It's clear that Google has completely lost the plot again with this insane, unmanageable complexity that blows up in your face every 5 minutes.



