
> Getting Kubernetes up and running isn't really the issue anymore, that's pretty easy to do. The tricky part is long term maintenance and storage.

This times 100. Deploying basic clusters is easy. Keeping a test/dev cluster running for a while? Sure. But keeping production clusters running (TLS cert TTLs expiring, anyone?), upgrading to new K8s versions, proper monitoring (the whole stack, not just your app or the control plane), provisioning (local) storage, and so on: that's where the difficulties lie.



I’m working on this right now. My theory is that having every cluster object defined in git (but with clever use of third-party Helm charts to reduce the maintenance burden) is the way to go.

Our cluster configuration is public[1] and I’m almost done with a blog post going over all the different choices you can make wrt the surrounding monitoring/etc infrastructure on a Kubernetes cluster.

[1] https://github.com/ocf/kubernetes
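
For a concrete flavor of what "every cluster object defined in git" can look like, here is a minimal Flux-style sketch of pulling in a third-party Helm chart as a git-tracked object. The repo linked above may well use a different tool or layout, so the API versions, chart, and values here are purely illustrative:

    # Hypothetical sketch: a third-party chart (ingress-nginx) declared in git,
    # Flux v2 style. Versions and values are illustrative, not the repo's setup.
    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: HelmRepository
    metadata:
      name: ingress-nginx
      namespace: flux-system
    spec:
      interval: 1h
      url: https://kubernetes.github.io/ingress-nginx
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
    spec:
      interval: 10m
      chart:
        spec:
          chart: ingress-nginx
          version: "4.x"          # pin or float depending on how much churn you accept
          sourceRef:
            kind: HelmRepository
            name: ingress-nginx
            namespace: flux-system
      values:
        controller:
          replicaCount: 2         # cluster-specific overrides live in git too

The point is that upgrading a cluster add-on becomes a version bump in a pull request instead of hand-run helm commands against the cluster.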


My comment above did not come out of the blue, but is based on real-world experience ;-) You may be interested in our MetalK8s project [1], which seems related to yours.

[1] https://github.com/scality/metalk8s


This looks really cool. Would you include dev apps in the monorepo too? Say you have 40 services owned by 20 different teams.


If I already have a production cluster that has been up and running for a while, how can I get it git-tracked now?

Another question: what about things like the autoscaler, which automatically edits replica counts in Deployments? How do you git-track that?
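
For the first question, the usual starting point I've seen is exporting the live objects (kubectl get ... -o yaml) and committing a cleaned-up copy. For the autoscaler case, one common pattern, sketched below with made-up names, is to leave spec.replicas out of the git-tracked Deployment entirely so the HPA owns that field and git never fights it:

    # Hypothetical sketch: the git-tracked Deployment deliberately omits
    # spec.replicas so the HorizontalPodAutoscaler below owns the replica count.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                     # made-up name
    spec:
      # no "replicas:" here -- the HPA manages it
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25     # placeholder image
    ---
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70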


First of all, with k3s, keeping a production cluster running is still pretty easy.

Second, you should always be ready to start from scratch, which is also pretty simple because of Terraform.

A lot of people are scared of K8s but have not even tried it. They prefer to maintain their scary Ansible/Puppet/whatever scripts that are only half as good as K8s.


> First of all, with k3s, keeping a production cluster running is still pretty easy.

Fair enough. I'll admit I have no direct experience with K3s. There are, however, many K8s deployment systems out there which I would not consider 'production-ready' at all even though they're marketed that way.

> Second, you should always be ready to start from scratch, which is also pretty simple because of Terraform.

That may all be possible if your environment can be spawned using Terraform (e.g., cloud or VMware environments and the like). If your deployment targets physical servers in enterprise datacenters where you don't even fully own the OS layer, Terraform won't bring much.

> A lot of people are scared of K8s but have not even tried it. They prefer to maintain their scary Ansible/Puppet/whatever scripts that are only half as good as K8s.

We've been deploying and running K8s as part of our on-premises storage product offering since 2018, so 'scared' and 'didn't try' don't really apply to my experience. Yes, our solution (MetalK8s; it's open source, PTAL) uses a tech 'half as good' as K8s (SaltStack, not Ansible or Puppet), because you need something to deploy and lifecycle said cluster. Once the basic K8s cluster is up, we run as much as possible 'inside' K8s. But IMO K8s is only a partial replacement for technologies like SaltStack and Ansible; it can fully replace them only in environments where you can somehow 'get' a (managed) K8s cluster out of thin air.


I've been using this Terraform provider quite a lot lately. It has made it a cinch to templatize a full manifest and pass data to the template for a deploy. We now have a fully reproducible base EKS cluster deploy with working cert-manager/Let's Encrypt, nginx ingress, weave-net, fluentd logging to the Elasticsearch service, etc. Our application-specific code lives in a different repo and deploys things using YTT. It's so much more elegant than our old method of copying and pasting manifests, crossing our fingers, and hoping the cluster wouldn't fall down. A full migration to a new cluster and a deploy of a whole new application stack now takes under an hour.

https://github.com/gavinbunney/terraform-provider-kubectl
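
To give a rough idea of what "templatize a full manifest and pass data to the template" can look like: the provider's kubectl_manifest resource takes a yaml_body, which you can render with Terraform's templatefile(). The template below is a made-up example; the resource names, issuer, and variables are illustrative, not our actual manifests:

    # ingress.yaml.tpl -- hypothetical template rendered with templatefile()
    # and applied via a kubectl_manifest resource; all names are illustrative.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ${app_name}
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # made-up issuer name
    spec:
      ingressClassName: nginx
      rules:
        - host: ${app_name}.${domain}
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: ${app_name}
                    port:
                      number: 80
      tls:
        - hosts:
            - ${app_name}.${domain}
          secretName: ${app_name}-tls

On the Terraform side, templatefile() fills in app_name and domain per environment, and the rendered string just goes into yaml_body, which is what makes the whole stack reproducible from variables.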


This is where Uffizzi is going. In Phase 1 they started with their own custom controller, and they manage clusters for customers who pay for a portion of that cluster. In Phase 2 they are opening up their control plane to enterprise customers to solve the “times 100” management issue mentioned above.



