
AWS support for Docker and Kubernetes is great; I'm curious why you say that.

Managed Kubernetes (EKS) is really good at this point, and coupled with the ALB Ingress Controller, it beats anything else I've tried in terms of monitoring, ingress routing, etc. One thing missing from EKS is the ability to pick the size of your control plane; the default is too small for dynamically scaling clusters running thousands of pods, so you have to go through support to get it increased.
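
For anyone who hasn't used it: the ALB controller watches ordinary Ingress objects and provisions an ALB from them. A minimal sketch, with made-up names and only the annotations I remember off the top of my head:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: web                                          # hypothetical name
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip        # route straight to pod IPs
  spec:
    ingressClassName: alb                              # IngressClass installed by the controller
    rules:
      - http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: web-svc                        # hypothetical Service
                  port:
                    number: 80

The controller turns each rule into ALB listener rules and target groups, which is where the nice routing and monitoring story comes from.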

Docker is more a function of the OS you choose for your instances, and it works as well on Amazon Linux as on any other. In k8s, Docker is no longer assumed to be the only container runtime (the kubelet talks to whatever implements the CRI), so lots of changes are in flight to make the runtime swappable.
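
Concretely, the runtime is just an endpoint the kubelet is pointed at. A sketch of the relevant kubelet config (field name from recent kubelet versions; older ones take it as the --container-runtime-endpoint flag, and the socket path depends on your distro):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # point the kubelet at containerd instead of the old dockershim
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock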

FWIW, my team is spending $400k/month on AWS right now. Suffice it to say, we use it a lot. It's been generally good, though it's hard to figure out new things because the docs are terrible: plentiful, but not well integrated with each other and not kept up to date.




+1 on EKS. It's not perfect but we've had an excellent experience building a database-as-a-service on top of it. We run well over 100 clusters and are growing rapidly. Best thing is that we didn't have to get enmeshed with K8s operations and instead focused on the applications on top.


400k a MONTH?

Man I'm jealous. :)


I don't want to assume things, but in many orgs this just means "we spend 10x more than we needed to, and just as much again on the multi-team developer/management workforce required to support it, because of course everything needs scale".


k8s doesn't get you scaling; in fact, it can get in the way. But if you play your cards right, it gets you higher utilization of the CPU and RAM you pay for than something simpler like plain autoscaling would, because it's a solution to dynamic workload distribution, which is a difficult thing to build yourself.
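
The mechanism behind that utilization is just resource requests: the scheduler packs pods onto nodes by summing requests, and limits cap what a container can actually use. A bare-bones sketch (names, image, and numbers are made up):

  apiVersion: v1
  kind: Pod
  metadata:
    name: worker                            # hypothetical
  spec:
    containers:
      - name: app
        image: registry.example/app:latest  # hypothetical image
        resources:
          requests:                         # what the scheduler bin-packs on
            cpu: 500m
            memory: 512Mi
          limits:                           # hard ceiling enforced at runtime
            cpu: "1"
            memory: 1Gi

The closer your requests track real usage, the tighter the packing.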


Oh, after trying my hand at a custom scheduler on bare-bones VMs, I dreamed of using K8s for bin-packing too, but in reality we still end up just running a bunch of Java services that are extremely poor at reporting their RAM usage, and so really aren't compatible with the sexy VPA and other such cool stuff.
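
(For context, VPA here is the Vertical Pod Autoscaler add-on, which watches actual usage and rewrites requests for you. Roughly, with made-up names:)

  apiVersion: autoscaling.k8s.io/v1
  kind: VerticalPodAutoscaler
  metadata:
    name: app-vpa              # hypothetical
  spec:
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: app                # hypothetical Deployment to resize
    updatePolicy:
      updateMode: "Auto"       # VPA evicts pods and re-creates them with new requests

That only works if observed usage is a decent predictor of future usage, which is exactly where spiky JVM heaps fall over.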

So the end result is still just a bunch of nodes running with way too much "free" RAM (and hence CPU) to accommodate the poorly predictable RAM consumption of the JRE. I once observed that what could have run on a single 16-core node (according to GKE's own cost-reporting metrics) ended up running on a cluster of 512 (!) vCPUs. So it's more than a 10x waste, purely to avoid OOM kills and the like.
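
The usual partial fix is to at least make the JVM heap track the container limit instead of the node, so the gap between request and reality shrinks. A sketch of that (names, image, and numbers made up; MaxRAMPercentage needs a reasonably recent JDK, 10+ or a late 8 update):

  apiVersion: v1
  kind: Pod
  metadata:
    name: java-service                           # hypothetical
  spec:
    containers:
      - name: app
        image: registry.example/java-app:latest  # hypothetical image
        env:
          - name: JAVA_TOOL_OPTIONS
            # size the heap off the cgroup limit rather than node RAM
            value: "-XX:MaxRAMPercentage=75.0"
        resources:
          requests:
            memory: 3Gi
          limits:
            memory: 4Gi

It doesn't make the usage any less spiky, but at least the ceiling is known.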


Is it k8s itself that takes up too much RAM?


No, the k8s kubelet, which runs on each worker node, doesn't use much RAM. Java services have notoriously spiky memory usage, so you have to provision for the spikes and end up underutilizing RAM in the average case. The previous poster is describing exactly that: they overprovisioned RAM heavily and ended up needing a lot of nodes, because of the way memory requests and limits are managed in k8s.

You fix this by enabling swap, allocating pods to nodes based on their typical memory usage, and accepting that a worker node will slow down when some Java process wants all the RAM.
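
Worth noting that swap on k8s nodes has historically been unsupported; newer versions gate it behind NodeSwap. A sketch of the kubelet side (field names from memory, exact gates and values depend on your version, and LimitedSwap only applies to burstable pods):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failSwapOn: false              # let the kubelet start on a node that has swap
  featureGates:
    NodeSwap: true               # not needed once the gate defaults on in your version
  memorySwap:
    swapBehavior: LimitedSwap    # pods may spill to swap within a calculated limit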


Tbh, the k8s "system" namespaces also consume quite a bit (particularly if you want to run a minimal system): at least 0.5 vCPU on each node and something like 0.5-1 GB of RAM. This only matters for the smallest systems, but it's still a hindrance to k8s adoption for such projects.



