
Last time I checked, Google's public renewable energy footprint was 2x Amazon's; using that as a proxy for the actual compute footprint, it's not that crazy to imagine there are plenty of slack resources to run small VMs or even Borg tasks for these masters.



AWS is 10x larger than GCP, so power and compute capacity are really not the issue here, especially for a product that just launched. There's also no reason AWS couldn't adopt the same lightweight Kubernetes master processes that GKE uses, after all this time.


AWS might be larger than GCP, but the rest of Google (Search, AdWords, AdSense, YT, Gmail, Maps, Apps, etc.) is larger than all of AWS+Amazon, and those are the kinds of resources over which Google can amortize things.

https://electrek.co/2017/11/30/google-is-officially-100-sun-...

Google has been running almost everything, GCP included, under Borg and can squeeze mixed workloads more tightly than the industry average. My understanding is that Amazon's technology is less uniform, but perhaps they have improved things and thus increased their utilization. You are basically printing money when utilization goes up by even 1%. Or you can afford to hand out freebies like Kubernetes masters, since they're lost in the noise.


> public renewable energy footprint

When electricity is the sole marginal cost of keeping a node powered up, and your pool of available machines can be powered on and off on demand, these particular words actually mean a lot.

I think the point is that GCP can afford to keep some nodes online even when they don't belong to anyone and just eat the cost. (E.g., renewable sources like wind and solar have essentially zero marginal cost, so why not spend the energy they generate?)

I'm sure I'm not giving AWS enough credit. Seeing that Google pretty much invented Kubernetes, Google's cost to come 'up to speed' enough to provide a managed version of it is effectively nil. Renewable energy is not Google's only source of so-called "extra capacity," either... they honestly should have had no trouble staffing a team to build out and promote GKE without siphoning off more than a nominal amount of developer capacity from other offerings that might need it.

If it's true that when AWS announced they'd be offering EKS last November, they only had a team of 5 allocated to it, then I guess they deserve a lot of credit for the progress they've made in about 8 months.

That cost to AWS is non-zero (I'm also pretty sure the team has grown and is larger than 5 devs now), and they have every reasonable basis to attempt to recoup it. I don't think many AWS customers will disagree. The adopters who have known they need it for a year or more, many of whom are happily locked into the ecosystem already, will certainly be lining up to pay for this now.


A more accurate point is not necessarily that Google will keep some more nodes up, but that it can fit small workloads like a Kubernetes master into existing nodes that run other stuff and have bits of spare capacity. Borg's resource estimation is pretty good at squeezing things.

I'm thinking specifically of masters for smaller clusters, where the fixed overhead costs of the control plane are relatively high compared to the node count. On larger clusters, the masters will be beefier, sure, but your customer will be paying for enough nodes to make up for their cost and the fixed overhead is more "diluted".


Speaking of capacity, I just got this error trying to build an EKS cluster.

UnsupportedAvailabilityZoneException: Cannot create cluster because us-east-1b, the targeted availability zone, does not currently have sufficient capacity to support the cluster. Retry and choose from these availability zones: us-east-1a, us-east-1c, us-east-1d

But yeah, sure, keep telling yourself that AWS doesn't have a problem with power and compute capacity. Or maybe it's just poor product design?

https://docs.aws.amazon.com/eks/latest/userguide/troubleshoo...
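
For what it's worth, the workaround is just what the error says: point the cluster at subnets in the AZs it lists. A minimal boto3 sketch, assuming hypothetical subnet IDs, security group, and role ARN (the subnets here would live in us-east-1a/c/d):

    import boto3
    from botocore.exceptions import ClientError

    eks = boto3.client("eks", region_name="us-east-1")

    try:
        eks.create_cluster(
            name="demo",
            roleArn="arn:aws:iam::123456789012:role/eks-service-role",  # placeholder
            resourcesVpcConfig={
                # subnets in the AZs the error said were OK (placeholder IDs)
                "subnetIds": ["subnet-aaaa1111", "subnet-cccc3333"],
                "securityGroupIds": ["sg-0123456789abcdef0"],
            },
        )
    except ClientError as e:
        # UnsupportedAvailabilityZoneException surfaces as a ClientError
        print(e.response["Error"]["Code"], e.response["Error"]["Message"])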

They should've called it Generally Available(ish)


The availability zones in us-east-1 have unique legacy problems related to companies that would select one AZ and deploy their entire (often huge) infrastructure to it. IIRC, Netflix did this, for one example.

Every company that did this was picking us-east-1a specifically, because it was the first AZ in the list. It added up, and now the us-east-1a datacenter can't add capacity fast enough to support the growth of all the companies "stuck" on it (because their existing infra is already deployed there, and they still need to grow.) Effectively, us-east-1a is "full."

Which means, of course, that companies would find out from friends or from AWS errors that us-east-1a is full, and so choose us-east-1b...

AWS fixed this a few years in by randomizing the AZ names in a region with respect to each AWS root account (so my AWS account's us-east-1a is your AWS account's us-east-1{b,c,d}). That way, new companies were better "load balanced" onto the AZs of a region.

But, because those companies still exist and are still growing in the specific DC that was originally us-east-1a (and to a lesser extent us-east-1b), those DCs are still full. So, for any given AWS account, one-and-a-half of the AZs in us-east-1 will be hard to deploy anything to.
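
You can actually see the per-account shuffle yourself: assuming a recent enough EC2 API, each AZ name maps to a zone ID (e.g. use1-az1) that refers to the same physical AZ in every account, so two accounts can compare their mappings. A small sketch:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # ZoneName (us-east-1a, -1b, ...) is shuffled per account;
    # ZoneId (use1-az1, use1-az2, ...) is not.
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(az["ZoneName"], "->", az["ZoneId"])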

Suggestions:

• for greenfield projects, just use us-east-2.

• for projects that need low-latency links to things that are already deployed within us-east-1, run some reservation actions to see how much capacity you can grab within each us-east-1 AZ, which will let you determine which of your AZs map to the DCs previously known as us-east-1a and us-east-1b (see the sketch below). Then, use the other ones. (Or, if you're paying for business-level support, just ask AWS staff which AZs those are for your account; they'll probably be happy to tell you.)
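
One way to run those "reservation actions" (not the only way; the instance type and count here are arbitrary probes) is to attempt a short-lived on-demand Capacity Reservation in each AZ and release it immediately. The AZs that refuse are the full DCs to avoid:

    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2", region_name="us-east-1")

    for az in ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"]:
        try:
            resp = ec2.create_capacity_reservation(
                InstanceType="m5.large",        # arbitrary probe size
                InstancePlatform="Linux/UNIX",
                AvailabilityZone=az,
                InstanceCount=20,
            )
            cr_id = resp["CapacityReservation"]["CapacityReservationId"]
            # cancel right away to keep the cost negligible
            ec2.cancel_capacity_reservation(CapacityReservationId=cr_id)
            print(az, "has capacity")
        except ClientError as e:
            print(az, "refused:", e.response["Error"]["Code"])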



