I've been an early adopter of Docker. I used Compose back when it was still called Fig, and used and deployed Kubernetes from beta up to version 1 for an in-house PaaS/Heroku-like environment.
Must say I do miss those days when K8s was an idea that could fit in your head. The primitives were just enough back then. It was a powerful developer tool for teams and we used it aggressively to accelerate our development process.
K8s has now moved beyond this and seems to me to be focussing strongly on its operational patterns. You can see these operational patterns being used together to create a fairly advanced on-prem cloud infrastructure. At times, to me, it looks like over-engineering.
Looking at the Borg papers, I don't remember seeing operational primitives this advanced. The developer interface was fairly simple, i.e. this is my app, give me these resources, go!
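In k8s terms, I mean something about like the snippet below and not much more -- a rough sketch, where the image name and resource figures are made up for illustration:

```yaml
# Minimal "here's my app, give me these resources" declaration.
# Image name and numbers are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0   # illustrative image
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "512Mi"
```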
I know you don't have to use this new construct but it sure does make the landscape a lot more complicated.
I agree that this new construct makes the landscape even more complicated, but I disagree that k8s has reached the point of over-engineering. Most parts of k8s are still essentially complex to me -- they're what you'd need if you wanted to build a robust resource-pool-management kind of platform.
Ironically, the push to "simplify" the platform with various add-on tools is what is making it seem more complicated. Rather than just bucking up and telling everyone to read the documentation, and understand the concepts they need to be productive, everyone keeps building random, uncoordinated things to "help", and newcomers become confused.
For example, I don't know who this operator framework is aimed at. It's not aimed at application developers but at k8s component creators who write cluster-level tools -- yet what cluster-tool writer would want to write a tool without understanding k8s at its core? Those are the table stakes. If I understand k8s and already understand the operator pattern (which is really just a Controller + a CRD, two essential bits of k8s), why would I use this framework?
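For reference, the CRD half is just another resource definition you register with the API server. A minimal sketch, with a made-up group and kind, looks something like the following; the Controller half is then whatever loop you write to watch those objects and reconcile the cluster toward their spec:

```yaml
# Sketch of a CustomResourceDefinition -- the "CRD" half of an operator.
# Group, kind, and names here are hypothetical examples.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
```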
I think if they really wanted to help, they'd produce some good documentation or a cookbook and maintain an annex of the state of the art in how to create/implement operators. But that's boring, and there's no notoriety in it.
It's not that they force Kubernetes to be more complex; it's that they muddy the waters. I clearly understand that they're optional and essentially an add-on, but it might not look that way to a newcomer.
People are being encouraged to download a Helm chart before they even write their first Kubernetes resource definition. People might start using this Operator Framework before they implement their own operator from scratch (that's kind of the point) -- though honestly it's unlikely that they'll actually be clueless, since it's aimed at cluster operators.
> You can see these operational patterns being used together to create a fairly advanced on-prem cloud infrastructure. At times, to me, it looks like over-engineering.
Well, consider that you want a highly available solution that supports blue-green / rolling deploys without downtime. You either build it yourself or you rely on something like k8s. It's not that much over-engineering.
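As a rough illustration of how little you have to write for the rolling-deploy part (the app name, image, and counts below are invented), compare this with building the equivalent yourself on plain VMs:

```yaml
# Sketch: a Deployment configured to roll out new versions with no downtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a serving pod down before its replacement is ready
      maxSurge: 1         # bring up one extra pod at a time during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0   # bump this tag to trigger a rolling deploy
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```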
K8s is a lot of code, yes. But the constructs are still pretty simple. I think deploying k8s is still way easier than most other solutions out there, like all these PaaSes and cloud solutions. Spinning up K8s is basically just using Ignition/cloud-config, CoreOS and PXE, or better, iPXE. Yeah, sometimes it's troublesome to upgrade a k8s version or an etcd cluster. However, everything on top of k8s, or even CoreOS itself, is extremely simple to upgrade.
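To give a flavour of what I mean by cloud-config, here's a very rough sketch, not a working bootstrap -- a real one also needs certificates, networking, and the control-plane components, and the paths and flags below are illustrative:

```yaml
#cloud-config
# Rough sketch: a CoreOS cloud-config that starts a kubelet via a systemd unit.
# A real bootstrap config is considerably longer.
coreos:
  units:
    - name: kubelet.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes kubelet
        [Service]
        ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubeconfig
        Restart=always
        RestartSec=10
```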
For context: our current system uses Consul, HAProxy, Ansible and some custom-built stuff to actually run things. System upgrades are still done manually or through Ansible, and my company plans to replace that with k8s. It's just way simpler to keep everything up to date and running with high availability, without disruption on deployments.
It's also way simpler to actually get new services/tools into production, e.g. Redis or Elasticsearch, without needing to keep them up to date and running yourself.
Not the parent, but I really like Nomad + Consul + Fabio (or Traefik) too. I tried learning Kubernetes but there was so much to take in all at once; I tried learning the HashiStack and I could try it out one product at a time.
It's not clear to me whether you're confusing the Compose/Swarm development progression (which is close to your ideal) with Kubernetes (which, afaik, was over-engineered from the beginning).
Kubernetes has the huge day-1 problem that it doesn't solve all of your problems. The hard stuff, like networking and distributed storage, is just hook-in APIs. That's fine on Google's cloud, where all the other pieces exist and were developed with these interfaces in mind, so all the endpoints are there. But most companies don't work in GCP/AWS alone. The moment you come on-premise, you see that Kubernetes only does 25% of what it needs to do to get the job done.
So you have this tool that already lacks 75% of the job in its original design, and it tries to overcome this by adding more stuff. Then you combine that with a prematurely hyped community that just adds more stuff to solve problems that are already solved, that don't need to be solved, or that aren't problems at all, just to get their own names and logos into what's out there.
These two patterns make it very clear that it's impossible for Kubernetes to ever become a lean, developer-friendly tool. But it's already a great environment for making money, I can tell you. And I think maybe that was the main goal from the beginning.
There's some truth and some wistful hope in your post. In my time at Google, the only thing that was anything like these "Operators" was what the MySQL SRE team developed, which was great, but they also admitted it was a bit "round peg, square hole". There's a shared persistence layer that hasn't quite shown up yet; you need a low-latency POSIX filesystem and a throughput-heavy non-POSIX system (Chubby and GFS in the Borg world; etcd and ??? in k8s). Not having the ability to work with persistent, shared objects is the biggest detriment to the ecosystem. S3 sorta works if you're in AWS, GCE supports Bigtable, etc.
Operations is always more complex than folks expect it to be, and product evolution typically reflects that. Kubernetes was simple because it couldn't miraculously teleport straight to doing all the things ever-larger clusters require of it.
We forever rush to the limits of current technology and then blame the technology.
I think it's worth noting that Kubernetes never tried hard to impose an opinion about what belongs to the operator (as in the person running it) and what belongs to the developer. You get the box and then you work out amongst yourselves where to draw the value line.
Cloud Foundry, which came along earlier, took inspiration from Heroku and had a lot of folks of the convention-over-configuration school involved in its early days. The value line is explicitly drawn. It's the opinionated contract of `cf push` and the services API. That dev/ops contract allowed Cloud Foundry to evolve its container orchestration system through several generations of technology without developers having to know or care about the changes. From pre-Docker to post-Istio.
Disclosure: I work for Pivotal, we do Cloud Foundry stuff. But as it happens my current day job involves a lot of thinking about Kubernetes.