
kops creates an autoscaling group for the worker nodes as part of the cluster creation process. Adding the cluster autoscaler is as simple as deploying the autoscaler (as mentioned in your linked docs) pointed at the correct ASG. The IAM permissions for the autoscaler can be added with `kops edit cluster $CLUSTER`.
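If it helps, here's roughly what that looks like (cluster and instance-group names are just placeholders for whatever kops created for you):

    # kops edit cluster cluster.example.com -- under spec:, grant the
    # node role the Auto Scaling permissions the autoscaler needs
    additionalPolicies:
      node: |
        [
          {
            "Effect": "Allow",
            "Action": [
              "autoscaling:DescribeAutoScalingGroups",
              "autoscaling:DescribeAutoScalingInstances",
              "autoscaling:SetDesiredCapacity",
              "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
          }
        ]

    # then point the cluster-autoscaler deployment at the ASG kops made
    # for the "nodes" instance group, e.g. min 1 / max 10:
    --nodes=1:10:nodes.cluster.example.com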



Thanks! I have been using kubeadm for my "cluster" with some home-made ansible playbooks so I hadn't just gone ahead and tried it out yet. Good to know I'm headed down a path that others have already been down.


I've spent a solid amount of time reading docs and looking into this. It sounds like this might not be an issue for you, but the major gotcha is that k8s doesn't yet support rescheduling existing pods to rebalance things when new nodes join the cluster (there was some stuff in the proposal phase to address it, though).

So, if your workload is made up of lots of short-lived ephemeral stuff you're good to go, but otherwise you may have to manually step in to rebalance stuff onto the new nodes.
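To be concrete about "manually step in" (the node name below is a placeholder), the usual dance is to cordon the crowded node and then evict its pods so the scheduler re-places them, hopefully onto the new capacity:

    # stop new pods landing on the busy node
    kubectl cordon ip-10-0-1-23.ec2.internal

    # evict its pods (daemonsets stay put); anything owned by a
    # Deployment/ReplicaSet gets recreated and rescheduled elsewhere
    kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets

    # let it accept work again once things have spread out
    kubectl uncordon ip-10-0-1-23.ec2.internal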

The autoscaler addon might have addressed some of this, but I'm not seeing anything obvious after a cursory overview of the docs.


I think I heard about that. Does sound like a pretty big gotcha, but I think you're right that it won't be a problem for my use case. Hopefully I can spend some time this weekend to try it out!


The autoscaler will add minions when there is a pod that cannot be scheduled. It doesn't help in balancing existing things, but at least the new pod should be able to go there.
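A rough way to watch that happening (sketch only, pod name is a placeholder): the trigger is a pod stuck in Pending that the scheduler can't place anywhere, so:

    # pods the scheduler couldn't fit; these are what make the
    # autoscaler grow the group
    kubectl get pods --all-namespaces --field-selector=status.phase=Pending

    # the pod's events show the "no nodes available" style reason
    kubectl describe pod <pending-pod>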


I saw that and was confused; I thought that would work out of the box with k8s since (I think?) it continually tries to keep scheduling pods. I must be wrong, though.

If balancing gets implemented, am I correct that it would probably happen in k8s core?


I think I wouldn't assume that myself, at least not after my stint in VietMWare (PTSD from a prior vSphere/vSAN experience).

We had VMTurbo/Turbonomic and VMWare's internal DRS that would sometimes compete against each other to decide where VMs should be scheduled.

You could handle balancing from inside k8s core, or not. All you need is to evict the pod on the over-provisioned node and to arrange for the replacement to be up before the original pod is fully evicted. It should work the same way in Kubernetes, if I had to guess.
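For the "replacement up before the original is gone" part, Kubernetes at least has PodDisruptionBudgets: evictions are refused if they'd drop a set of pods below the budget (just a sketch, names made up):

    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb            # hypothetical name
    spec:
      minAvailable: 2          # refuse evictions that would leave fewer than 2 matching pods
      selector:
        matchLabels:
          app: web             # hypothetical label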



