My contrarian view is that EC2 + ASG is so pleasant to use. It’s just conceptually simple: I launch an image into an ASG and configure my autoscaling policies. There are very few things to worry about. On the other hand, using k8s has always been a big deal. We built a whole team to manage k8s. We either introduce dozens of k8s concepts to every engineer or spend person-years on “platform engineering” to hide them. We publish guidelines and SDKs and all kinds of validators so people can use k8s “properly”. And we still write tens of thousands of lines of YAML plus tens of thousands of lines of code to implement an operator. Sometimes I wonder if k8s is too intrusive.
K8S is a disastrous complexity bomb. You need millions upon millions of lines of code just to build a usable platform. Securing Kubernetes is a nightmare. And lock-in never really went away because it's all coupled with cloud specific stuff anyway.
Many of the core concepts of Kubernetes should be taken to build a new alternative without all the footguns. Security should be baked in, not an afterthought when you need ISO/PCI/whatever.
> K8S is a disastrous complexity bomb. You need millions upon millions of lines of code just to build a usable platform.
I don't know what you have been doing with Kubernetes, but I run a few web apps out of my own Kubernetes cluster, and the full extent of my own code is the two dozen or so lines of kustomize I use to run each app.
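For what it's worth, a typical one of those kustomize files is roughly this shape (the app and registry names are illustrative):

```yaml
# kustomization.yaml - roughly the shape of a per-app kustomize file (names illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: my-web-app

resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml

images:
  - name: my-web-app
    newName: registry.example.com/my-web-app
    newTag: "1.4.2"

commonLabels:
  app.kubernetes.io/name: my-web-app
```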
I run my own cluster too; it is managed by one terraform file which is maintained on GitHub [0]. Along with that I deploy everything on here with 1 shell script and a bunch of yaml manifests for my services. It's perfect for projects that are managed by one person (me). Everything is in git and reproducible. The only unconventional thing I'm doing is that I didn't want to use GitHub Actions, so I use Kaniko to build my Docker containers inside my cluster.
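The Kaniko part is less exotic than it sounds; each build is basically a Job roughly shaped like this (the registry, repo and secret names here are placeholders, not my actual setup):

```yaml
# Rough sketch of an in-cluster Kaniko build Job (registry/repo/secret names are placeholders)
apiVersion: batch/v1
kind: Job
metadata:
  name: build-my-service
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args:
            - --dockerfile=Dockerfile
            - --context=git://github.com/example/my-service.git
            - --destination=registry.example.com/my-service:latest
          volumeMounts:
            - name: docker-config
              mountPath: /kaniko/.docker
      volumes:
        - name: docker-config
          secret:
            secretName: registry-credentials
            items:
              - key: .dockerconfigjson
                path: config.json
```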
If you're using a K8S cluster just to deploy a few web apps then it's not really a platform that you could provide to an engineering team within a medium-large company. You could probably run your stuff on ECS.
While I love ECS, you're not giving k8s enough credit. Nearly every COTS (commercial off-the-shelf) app has a helm chart; hardly any provide direct ECS support. If I want a simple kafka cluster or zookeeper cluster there's a supported helm chart for that; nothing is provided for ECS, you have to build that yourself.
> If you're using a K8S cluster just to deploy a few web apps (...)
It's really not about what I do and do not do with Kubernetes. It's on you to justify your "millions upon millions of lines of code" claim because it is so outlandish and detached from reality that it says more about your work than about Kubernetes.
I repeat: I only need a few dozen lines of kustomize scripts to release whole web apps. Simple code. Easy peasy. What mess are you doing to require "millions upon millions" lines of code?
Please don't deflect the question. You claimed you need millions and millions of LoC to get something running on Kubernetes. I stated the fact that I have multiple apps running in my personal Kubernetes cluster and they only require a couple of dozen lines of Kustomize. You are the one complaining about complexity where apparently no one else sees it. Either you're able to back up your claims, or you can't. I don't think you can, actually, and I think that's why you are deflecting questions. In fact, I'd go as far as to claim you have zero experience with Kubernetes, and you're just parroting cliches.
You're both using hyperbole that doesn't match the reality of the average-sized company using Kubernetes. It's neither "millions upon millions of lines of code" nor "just a few dozen lines of kustomize scripts".
I think they're more getting at k8s requiring a whole mess of 3rd-party code to actually be useful when bringing it to prod. For EKS you end up having coredns, fluentbit, secrets store, external dns, the aws ebs csi controller, the aws k8s cni, etc.
And in the end it's hard to say if you've actually gained anything, except that now this different code manages your AWS resources the way you were already doing with CloudFormation or terraform.
Everything we run our workloads on is built on millions of LoC, whether it's in the OS or in K8S, built-in or external. If you decide to run K8S in AWS, you'll be better off using Karpenter, external-secrets and all these things, as they will make your life easier in various ways.
kubeadm + fabric + helm got me 99% of the way there. My direct report, a junior engineer, wrote the entire helm chart from our docker-compose. It will not entirely replace our remote environment but it is nice to have something in between our SDK and remote deployed infra. Not sure what you meant by security; could you elaborate? I just needed to expose one port to the public internet.
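For context, the exposed port is just a single Service along these lines (names and ports here are placeholders, not our actual config):

```yaml
# Minimal sketch of exposing one port publicly (names/ports are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: my-app-public
spec:
  type: LoadBalancer   # or NodePort on a bare kubeadm cluster without a cloud load balancer
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
```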
Argo CD, Argo Rollouts, Vault, External Secrets, Cert Manager, Envoy, Velero, plus countless operators, plus a service mesh if you need it, the list goes on. If you're providing Kubernetes as a platform at any sort of scale you're going to need most of this stuff or some alternatives. This sums up to at least multiple million LOC. Then you have Kubernetes itself, containerd, etcd...
To me, it sounds like your company went through a complex re-architecting exercise at the same time you moved to Kubernetes, and your problems have more to do with your (probably flawed) migration strategy than with the tool.
Lifting and shifting an "EC2 + ASG" set-up to Kubernetes is a straightforward process unless your app is doing something very non-standard. It maps to a Deployment in most cases.
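As a rough sketch, "launch an image into an ASG and configure autoscaling policies" maps to something like this (all names and numbers are made up):

```yaml
# Rough equivalent of an ASG-launched image: a Deployment... (names/numbers illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
---
# ...and the autoscaling policy roughly becomes an HPA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```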
The fact that you even implemented an operator (a very advanced use-case in Kubernetes) strongly suggests to me that you're doing way more than just lifting and shifting your existing set-up. Is it a surprise then that you're seeing so much more complexity?
> My contrarian view is that EC2 + ASG is so pleasant to use.
Sometimes I think that managed kubernetes services like EKS are the epitome of "give the customers what they want", even when it makes absolutely no sense at all.
Kubernetes is about stitching together COTS hardware to turn it into a cluster where you can deploy applications. If you do not need to stitch together COTS hardware, you already have far better tools available to get your app running. You don't need to know or care which node your app is supposed to run on, what your ingress controller is, whether you need to evict nodes, etc. You have container images, you want to run containers out of them, you want them to scale a certain way, etc.
I tend to agree that for most things on AWS, EC2 + ASG is superior. It's very polished. EKS is very bare bones. I would probably go so far as to just run Kubernetes on EC2 if I had to go that route.
But in general k8s provides incredibly solid abstractions for building portable, rigorously available services. Nothing quite compares. It's felt very stable over the past few years.
Sure, EC2 is incredibly stable, but I don't always do business on Amazon.
At first I thought your "in general" statement was contradicting your preference for EC2 + ASG. I guess AWS is such a large part of my world that "in general" includes AWS instead of meaning everything but AWS.