Been waiting for this for... 7 months now since its announcement at re:Invent?
My initial impressions are very negative. What even is the point of this? The clusters themselves do nothing, they're just a control plane you pay $144/month for. You need to add nodes from CloudFormation? Is there any integration at all with CodeBuild/CodeDeploy/CodePipeline? No mention anywhere of Ingress... I sure hope that's built in, but from what I'm reading it isn't? What was Amazon thinking by releasing this so unfinished?
This is my impression as well. It's the thinnest possible layer of service integration with what basically amounts to demo templates for everything else. This is NOT a managed service. Nothing close. I expected something much more self-contained. Anyone looking to run K8s at scale on AWS may as well just do it themselves. This doesn't buy you anything except locking into whatever versions AWS can manage to support.
The rumor at Kubecon (in December, right after re:Invent) was that at the time there were only five people working on EKS, so you shouldn't have expected much. Thus, even if the team grew in the meantime, and it must have, you can't expect a solid product before at least the end of 2018. Waiting one year from announcement before adopting a brand new AWS product seems to be a reasonable rule of thumb.
Do you think management nodes are free? Master nodes need to run etcd plus the Kubernetes API server. These masters run across multiple AZs in an HA configuration, which would be expensive to replicate even if you used smaller instances yourself. Google's might be free, but if you are already on AWS, don't want to manage Kubernetes masters at all, and treat your worker nodes like cattle, it is a perfectly good solution.
What integration do you need? The good thing about EKS is that it is CNCF-conformant Kubernetes, meaning anything that works with upstream Kubernetes works here, including ingress controllers that already work with both ALB and ELB+nginx.
I am not defending Amazon, most of their managed services suck and are very slow, but both the preview and the GA experience of EKS were great, and good enough for me.
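To make the conformance point concrete: a plain upstream Ingress object works on EKS exactly as it does anywhere else, and whichever controller you've installed (nginx behind an ELB, or an ALB ingress controller) picks it up. A hedged sketch with the kubernetes Python client, using today's networking.k8s.io/v1 API; the hostname, namespace, and backing service are made-up placeholders, nothing here is EKS-specific:

```python
# Hedged sketch: create a stock upstream Ingress against an EKS cluster.
# Hostname, namespace, and backing service are hypothetical; the controller
# that satisfies it is whatever you have installed in the cluster.
from kubernetes import client, config

config.load_kube_config()  # uses the kubeconfig generated for the EKS cluster

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="demo-ingress"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="demo.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="demo-service",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```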
> Do you think management nodes are free? ... Google['s management nodes] might be free
Indeed they are. Why do you think that is?
It could be that they're a great big loss-leader for GKE, or that the costs of GKE management nodes are amortized into the per-cluster-node costs. But neither of these seem to be true. (The pricing of GKE management nodes before they made them free was negligible; and GKE nodes don't cost any more than equivalent GCE nodes.)
The real answer, I think, is that K8s management nodes really are just not all that big a deal to run. Certainly not $144/mo of a big deal.
Isn't it funny how Google announces free GKE stuff the day before EKS was announced AND made generally available? :-) Like Jobs issuing a press release that the App Store had hit 3B downloads the morning of the Nexus One launch.
I'd speculate that Google did it on purpose the first time (everyone knew there had to be an announcement coming, and the date of re:Invent was well known in advance.)
Given what some are saying in this thread here today, I would not be the least bit surprised to learn that Google's announcement this time actually forced Amazon's hand, and that the folks on Amazon's Twitch stream this afternoon actually found out yesterday that they'd be making the announcement today.
Yeah, it was safe to assume they would announce something related at re:Invent, given that they had joined the CNCF a few months before, after simply ignoring Kubernetes for years.
I would agree with you, but no one (honestly) knew when EKS would hit GA, including customers we talked to. I think this was just a coincidence.
Last time I checked, Google's public renewable energy footprint was 2x Amazon's, as a proxy for the actual footprint, so it's not that crazy to imagine that there are plenty of slack resources to run small VMs or even Borg tasks for these masters.
AWS is 10x larger than GCP so power and compute capacity are really not the issue here, especially for a product that just launched. There's also no reason AWS couldn't adopt the same lightweight kubernetes master processes that GKE uses after all this time.
AWS might be larger than GCP, but the rest of Google (Search, Adwords, Adsense, YT, Gmail, Maps, Apps, etc.) is larger than all of AWS+Amazon and that's the kind of resources over which Google can amortize things.
Google has been running almost everything, GCP included, under Borg and can squeeze mixed workloads more tightly than the industry average. My understanding is that Amazon's technology is less uniform, but perhaps they have improved things and thus increased their utilization. You are basically printing money when it goes up by even just 1%. Or you can afford to hand out freebies like Kubernetes masters, since they're lost in the noise.
When electricity is the sole marginal cost of keeping a node powered up from your pool of available machines that can be powered on and off on-demand, these particular words actually mean a lot.
I think the point is that GCP can afford to keep some nodes online even when they don't belong to anyone and just eat the cost. (E.g., renewable resources like wind and solar don't cost anything at the margin, so why not spend the energy that is generated from them.)
I'm sure I'm not giving AWS enough credit. Seeing that Google pretty much invented Kubernetes, their cost to come 'up to speed' enough to provide a managed version of it is effectively nil. Their renewable energy is not the only source of so-called "extra capacity," either... they honestly should have had no trouble staffing a team to build out and promote GKE without siphoning off much more than a nominal amount of developer capacity from other offerings that may need it as well.
If it's true that when AWS announced they'd be offering EKS last November, they only had a team of 5 allocated to it, then I guess they deserve a lot of credit for the progress they've made in about 8 months.
That cost to AWS is non-zero (I'm also pretty sure the team has grown and is larger than 5 devs now) and they have every reasonable basis to attempt to recoup it. I don't think many AWS customers will disagree. The adopters who have known they need it for a year or more, many of whom are happily locked into the ecosystem already, will certainly be lining up to pay for this now.
A more accurate point is not necessarily that Google will keep some more nodes up, but that it can fit small workloads like a Kubernetes master into existing nodes that run other stuff and have bits of spare capacity. Borg's resource estimation is pretty good at squeezing things.
I'm thinking specifically of masters for smaller clusters, where the fixed overhead costs of the control plane are relatively high compared to the node count. On larger clusters, the masters will be beefier, sure, but your customer will be paying for enough nodes to make up for their cost and the fixed overhead is more "diluted".
Speaking of capacity, I just got this error trying to build an EKS cluster.
UnsupportedAvailabilityZoneException: Cannot create cluster because us-east-1b, the targeted availability zone, does not currently have sufficient capacity to support the cluster. Retry and choose from these availability zones: us-east-1a, us-east-1c, us-east-1d
But yeah, sure, keep telling yourself that AWS doesn't have a problem with power and compute capacity. Or maybe it's just poor product design?
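(For what it's worth, the workaround is exactly what the error suggests: pin the cluster's subnets to the AZs it lists. A rough boto3 sketch; every ID and ARN here is a placeholder:)

```python
# Hedged sketch: retry EKS cluster creation using only subnets in the AZs the
# error message said have capacity. All IDs and ARNs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
eks = boto3.client("eks", region_name="us-east-1")

allowed_azs = ["us-east-1a", "us-east-1c", "us-east-1d"]

# Find subnets in the cluster VPC that live in the allowed AZs.
subnets = ec2.describe_subnets(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},
        {"Name": "availability-zone", "Values": allowed_azs},
    ]
)["Subnets"]

eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::111122223333:role/eks-service-role",
    resourcesVpcConfig={
        "subnetIds": [s["SubnetId"] for s in subnets],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```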
The availability zones in us-east-1 have unique legacy problems related to companies that would select one AZ and deploy huge amounts of their infrastructure to it. IIRC, Netflix did this, for one example.
Every company that did this was picking us-east-1a specifically, because it was the first AZ in the list. It added up, and now the us-east-1a datacenter can't add capacity fast enough to support the growth of all the companies "stuck" on it (because their existing infra is already deployed there, and they still need to grow.) Effectively, us-east-1a is "full."
Which means, of course, that companies would find out from friends or from AWS errors that us-east-1a is full, and so choose us-east-1b...
AWS fixed this a few years in by making the AZs in a region randomized with respect to each AWS root account (so my AWS account's us-east-1a is your AWS account's us-east-1{b,c,d}.) So new companies were better "load balanced" onto the AZs of a region.
But, because those companies still exist and are still growing on the specific DC that was us-east-1a (and to a lesser extent us-east-1b), those DCs are still full. So, for any given AWS account, one-and-a-half of the AZs in us-east-1 will be hard to deploy anything to.
Suggestions:
• for greenfield projects, just use us-east-2.
• for projects that need low-latency links to things that are already deployed within us-east-1, run some reservation actions to see how much capacity you can grab within each us-east-1 AZ, which will let you determine which of your AZs map to the DCs previously known as us-east-1a and us-east-1b. Then, use the other ones. (Or, if you're paying for business-level support, just ask AWS staff which AZs those are for your account; they'll probably be happy to tell you.)
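If you'd rather not probe with reservations, a lighter-weight complement (assuming it fits your situation) is the stable zone IDs the EC2 API exposes: each account's shuffled zone names map to consistent IDs like use1-az1, so you can compare notes across accounts, or with AWS support, about which physical zone is which. A small boto3 sketch:

```python
# Hedged sketch: map this account's shuffled AZ names to the stable zone IDs
# that are consistent across accounts, so different accounts can tell whether
# their differently-named AZs are actually the same facility.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "->", az.get("ZoneId", "n/a"))

# Example output (values will differ per account):
#   us-east-1a -> use1-az2
#   us-east-1b -> use1-az4
#   ...
```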
Why wouldn't Amazon do the same then, when they have a higher market cap than Google and a known reputation for loss-leading an entire vertical to the tune of 100s of millions just to put a competitor out of business?
This sub-thread was about AWS not having the same resource efficiency as Google, and Google also subsidizing with AdSense, which is why GKE is free. So in that sense there is no reason why AWS couldn't also make them free.
Not every deployment is $1000s/month. I don't care about EKS, but it will hurt adoption if every single cluster requires a $144/month fee for the master when there are offerings from GCP and Azure that are free and far more polished.
> Not every deployment is $1000s/month. I don't care about EKS, but it will hurt adoption if every single cluster requires a $144/month fee for the master when there are offerings from GCP and Azure that are free and far more polished.
Maybe that's the reasoning behind it. If the service isn't really up to standard yet, it definitely makes sense. Smaller players, when they see an unfinished product, will not come back.
Keep the small clients away with the price barrier until the service is more polished; use the bigger clients who can afford the support, iterate, and improve.
EKS may be appealing for a shop that has AWS expertise, automation, and monitoring set-up already and that doesn't necessarily want to manage a completely different stack for k8s. EKS has some form of integration with IAM, EBS, ALB, Route53, CloudWatch, etc.
As to 'unfinished', this is the usual model for AWS. New services are initially very simple, and rich features and integrations are added over time. It's been that way for ten years. ECS didn't even support running multiple container ports per instance when it was released!
Just guessing, I think they were thinking, "We'd better put something out there in GA before Azure/AKS does. We're already late to the party." Maybe they actually didn't think of that at all... but I'm very skeptical of that.
I find Amazon's attitude towards the whole Kubernetes thing lukewarm at best. It seems they really wanted ECS to be the killer container service on AWS, but when K8s took over the mindshare they reluctantly added Fargate and EKS, slowly and with an underwhelming product.
We have worked with both AWS container solutions and GKE and find GKE far superior. We had to build Skycap as a deployment solution for our applications on top of it, but the end result is an amazingly simple system delivering HA and robustness we never could have imagined with any other solution as easily.
Amazon is extremely data driven. If their attitude is lukewarm at best then you might consider the market is lukewarm at best.
People take for granted that the value of k8s has been proven. It has not. The value of containerizing your application is clear and has been realized over and over again. The jury is still out on k8s.
Not really. There aren't any real alternatives other than the dying Docker Swarm and some overlap with Nomad and Mesos. The "war" is over, Kubernetes has come out on top and is rapidly developing both as a platform and as offerings from Google's GKE and Azure AKS. Almost all Kubernetes deployments are using a managed solution of some sort, including OpenShift and the rest, so AWS is falling behind here badly with their current offerings.
It's kind of a cynical position, actually, because embedded in it is the underlying contention that all of the attention the platform has gotten since 2015, the startups it has spawned, the resources being put into it by every major systems tech vendor, the doubling in size of kubecon every year for the last three years, the hundreds of how-to and in-depth articles that have been written, have all resulted from some sort of hype and anticipation of future benefits. There are thousands of nodes running production workloads right now. I've been running it in production since 2016. The value is certainly apparent to our team. Yes you can stitch a lot of it together with shell scripts and systemd or swarm or whatever. But why would you?
Not really; the value of containers that most people have seen is that a container is just a more portable package. It's easier to move between environments in a consistent way. Whether I do that with fancy orchestration, shell scripts, or configuration management largely doesn't matter. I can get most of the advantages with shell scripts.
As someone who has worked in an environment where thousands of machines were being deployed to, this is ridiculous. Orchestration is an area where good tooling makes an absolutely massive difference. Shell scripts are not a good tool for observability or handling rollbacks.
I didn't say orchestration provides no value. I've spent the last 15 years building orchestration systems; I believe in them. I'm merely saying that most of the value of containers is achieved without requiring the adoption of orchestration. What I mean is: if you take your existing means of running non-containerized apps, and change nothing except putting your app in a container, you will see a large benefit. Changing your existing means of deploying applications and swapping it out for an orchestration engine such as K8s may or may not benefit you. If you have 1000 servers, you will most likely see a benefit. If you have 20, you might not.
K8s or orchestration/clustering is not the only way to run containers. Amazon knows that. If I were to make a bet, I'd say that Fargate or Lambda have a much better chance of being a major money maker for AWS.
I seriously doubt that. 1000+ servers is a lot. The number of organisations worldwide running that many servers in any sort of coordination must be pretty low. Services (or "pods") sure, but actual servers? Can't be more than a few hundred companies, surely.
One would also think that by 1000-ish servers it's starting to make a lot of financial sense to move out of AWS anyway.
Amusingly, since there are only 200 comments, his statement is true simply by the luck of saying "average": multiple companies that comment here are in the tens to hundreds of thousands of servers range, bringing up the average for everyone :). That said, I'd be deeply surprised if the median was breaking 100. 100 dual-socket servers gets you a lot of compute these days!
Deploying containers is shell-script-easy as long as nothing goes wrong. Ever. I know you know this. I'm a little confused why you'd make a claim like this.
It's really true for most users. I've seen it over and over again. Working in SV we lose sight of just how simple many people's applications are. But I'm not saying just start a server and call it good. Use auto scaling groups and some monitoring from your cloud provider. It's typically sufficient. Some of the most successful container deployments I've seen just use Terraform + AWS services (EBS, ELB, ASGs). It's simple and gets the job done, with not a lot of moving parts.
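Purely for illustration (sketched with boto3 rather than Terraform so it's self-contained; AMI, security group, subnet IDs, and the container image are all made up), the whole pattern is roughly an ASG whose launch configuration just runs one container at boot:

```python
# Hedged sketch of the "containers without an orchestrator" pattern: an auto
# scaling group whose instances run a single container on boot. All identifiers
# below are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

user_data = """#!/bin/bash
yum install -y docker
service docker start
docker run -d --restart=always -p 80:8080 example.com/my-app:latest
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-app-lc",
    ImageId="ami-0123456789abcdef0",        # e.g. an Amazon Linux AMI
    InstanceType="t2.small",
    SecurityGroups=["sg-0123456789abcdef0"],
    UserData=user_data,
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-app-asg",
    LaunchConfigurationName="my-app-lc",
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-app/abc123"
    ],
)
```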
Do you have any actual data related to this claim or am I just supposed to assume that it's true by inferring things from Amazon's cloud offerings?
Edit: Sorry for the aggressive tone, I overreacted a bit because it seems like every single day there's someone pushing an anti-Kubernetes agenda on HN. Often they don't even use it, they just glanced at the docs and said "it's too complicated", or something.
Clearly, this isn’t you since you’re the cofounder of Rancher.
I don't mean to push an anti-K8s agenda. What I'm pointing out is that people have conflated the value of containers with the value of orchestration and clustering. They don't need to go hand in hand. Containers without orchestration have been proven to save money. It's hard at the moment to prove the real value of K8s. It's too early and there isn't enough adoption. When you look at most success cases of K8s, they often could achieve the same benefits using plain old Docker and some auto scaling groups.
Makes sense. I suppose in my case it’s ease of service to service communication + deployment that’s the primary benefit. Consul could easily handle the first, but deployment could be a bit trickier. Ultimately I think if GKE didn’t exist I very likely would have ended up using Nomad and Consul for simplicity’s sake. To me it seems some sort of scheduler is necessary if you want to have an easy way to control the horizontal scalability and availability of your individual containers and not just your hosts. Unfortunately setting that up isn’t nearly as well documented as setting up a k8s cluster right now, but IMO the end result is a lot easier to comprehend. The lack of end to end documentation is really what kept me away from adopting Hashicorp’s tools instead, so hopefully that will improve and k8s can have some real competition.
All of this is in the context of a microservices architecture, though. Something I generally wouldn’t choose to do unless there were good reasons for it. Defaulting to a monolith is still the safer choice IMO, and if you are deploying existing monoliths, or a small number of services, then it’s very possible that you don’t have a need for an orchestrator at all.
Certainly at a recent AWS summit I don't think I really heard a reference to it. ECS and Fargate certainly, but was surprised about how quiet it was with regard to news about EKS.
> You pay $0.20 per hour for the EKS Control Plane, and usual EC2, EBS, and Load Balancing prices for resources that run in your account
Objectively that's not bad for HA masters in separate AZs, but I think for those who have been using Kubernetes on the Google cloud it's certainly going to have a hard time competing with "you don't pay anything for HA masters at all."
> ingress
From the Twitch stream, it sounds like they have not worked out ingress with ALBs. No mention of Ingress on the announcement page. Twitch stream is here[1]. (it's over now) [2]
This is going to be super expensive to use in the near term.
Nishi Davidson just mentioned that ingress/ALB is a focus of sig-aws, so hopefully we can expect another announcement soon.
Not only recently — this was actually the cost until yesterday for GKE with HA masters distributed across AZs[1]!
I only heard that this announcement from AWS was likely coming this morning, over here in the thread about upgrading GKE clusters[2]. Given how long we've waited since the announcement at November's re:Invent, there's honestly not a lot that seems terribly rushed about this news, but I bet that AWS really would have liked to have Ingress controllers that are integrated with ALB ready for this announcement to go with their CNI plugins.
"Why am I waiting for Amazon to get out of preview when Google's been giving it away this whole time?" For AWS customers that aren't locked in, that seems like a reasonable train of thought.
The Google GKE offering can operate with a single g1-small worker.
That's $0.0257/hr or $0.0070/hr preemptible. $5/mo minimum*
This is not even the same ballpark.
*(to be fair, the $5/mo minimum does not include a private network and a load balancer, and you will almost definitely need both in order to actually do anything with a cluster.)
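Rough monthly math from the hourly rates quoted above, assuming ~730 hours/month and ignoring sustained-use discounts, load balancers, and data transfer:

```python
# Back-of-the-envelope monthly costs from the quoted hourly rates.
HOURS_PER_MONTH = 730

eks_control_plane = 0.20 * HOURS_PER_MONTH       # ~$146/mo before any worker nodes
g1_small_on_demand = 0.0257 * HOURS_PER_MONTH    # ~$18.80/mo (GKE masters are free)
g1_small_preemptible = 0.0070 * HOURS_PER_MONTH  # ~$5.10/mo

print(f"EKS control plane only:      ${eks_control_plane:.2f}")
print(f"GKE, one g1-small worker:    ${g1_small_on_demand:.2f}")
print(f"GKE, one preemptible worker: ${g1_small_preemptible:.2f}")
```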
You're thinking like a human. There are plenty of organizations where $5/mo and $150/mo are essentially indistinguishable and would not affect decision making in any way.
I second this. This means you don't have to set up and manage those master nodes. AWS will do this for you. A developer fixing an issue with your self-managed master nodes will be much more expensive than $150/month.
As an individual this may look expensive, but as a company this is much cheaper than paying a developer's salary.
Yes, it is nice for experiments, but if you already invest thousands of dollars a month in cloud resources, an additional $150 is for sure not a problem.
That's $150/mo before you even launch a single worker, though.
I don't know about you, but I'd personally rather pay the $5 if I don't need a serious cluster yet. How much in actual absolute dollars do you think this is going to cost you, just in order to figure out exactly how it works, and what other AWS widgets you'll also need to pay for in order to use it effectively?
I am a human, basically looking for a stepping stone on some actual cloud to land on that is comparable to Minikube, and this sure isn't it.
I'm pretty clearly in the minority with my opinion here, and I recognize that.
Amazon has made a substantial investment in Kubernetes here and that is nothing to shrug at. They are entitled (and justified) to recoup it. It's fair to say that K8S is not for lightweights, too.
AWS provides a fairly generous free tier and they are also justified to extract some payment from every user of their services that derives some non-negligible value. If you're using Kubernetes and you're not getting any value out of it, I think it's also fair to say that you're doing it wrong.
I serve a couple small websites from a very very small GKE cluster that costs well under $100/month. I don't worry about OS updates or Kubernetes upgrades. I don't worry about it going down.
You're not in the minority. It's well understood that there are amateurs trying to use the expensive clouds and shave off every penny, while the main customers are enterprise customers who couldn't notice a thousand dollars more or less.
It's just pointless for small users to argue about AWS costs. The service is not meant for them and it cannot fulfill their requirements.
You're obviously no slouch, and I respect your opinion, but I think Amazon is all about the long tail and without speculating too much about what they are thinking, I'll say they might disagree. They are obviously smart to go for the big money dollars first, but everything I know about "Long Tail" says you're wrong.
I absolutely agree with everything you said, if you only added "at this time" to the end of that last sentence.
What long tail do you mean? I would say the small tech companies with tens of developers and tens of servers are the long tail for AWS.
The service is too expensive and too complicated to be used by very small companies or individual users. There are also more appropriate competitors in that space.
You told me you thought I'm not in the minority, and others have come forward to agree. I think we are the long tail.
It is exactly as you say, companies that don't want to spend more than they absolutely need to on infrastructure. Shaving pennies to save money. We are each too small to make any significant money on us, taken individually. That's what makes us the long tail.
Do you think in 6 months, you will be able to get a non-HA Kubernetes master on Amazon's free tier? Almost surely within 12 months. At that time, they will have addressed the long tail. Today's announcement is not for us. It's for large enterprise customers that settled on Kubernetes (and 57% of Kubernetes is already on AWS, so many of them are likely not new customers.) We're both right, from opposite perspectives.
Sorry, there might be a misunderstanding. I meant that you were not in the minority, in terms of user count. There are people trying out AWS/Google, riding on the free tier, or just running a small site for the experience. People spending less than a hundred dollars a month could very well be the majority of the user count.
It doesn't mean that they bring any significant revenues or that the distribution follows a long tail. I would actually bet that amateur users are not forming a long tail.
And plenty of startups that choose between AWS and GCP at a point when those numbers are plenty distinguishable, even though later on they'll become those organizations.
You can afford ~5 preemptible instances for the cost of one full-time instance. I think you are severely overestimating the harm that is done by preempting/terminating instances in a prod Kubernetes cluster that is capably configured for full HA and distributed across zones. That's what it's made for – nodes are as disposable as pods. They may come and go as they please, and they'll be replaced as needed by the self-healing nature of the cluster and the scaling group.
(Why save a few dollars when you can get extra capacity instead? Especially when you have $100k of someone else's money to burn... so, for real though, anyone who has actually tried this configuration can chime in and confirm that it works as well as I imagine.)
I'm pretty sure that periodically shutting down some nodes is a boon for cluster utilization, too. One of the things that Kubernetes does not do on its own, is load rebalancing. You can configure the autoscaler to recognize when nodes are overprovisioned, and let it drain a few and shut them down... or you can let the preemptible nature of (some/all of your) nodes do it for you.
(Why not both? Some nodes are getting killed, or you're paying for resources you don't use, so... one way or another.)
"No experience" is a stretch. I'll concede, no significant Prod experience to speak of.
I got introduced to the Kubernetes ecosystem when Deis Workflow was rebuilt for it. Today, I'm a core contributor on the team that is building the fork of that project.
(It is a side thing for me. I'm very interested but I'm not spending hundreds on infrastructure every month. I have an infrastructure team at $dayJob, and they are not doing K8S at all, if you are having a hard time understanding how exactly I got here.)
It sounds like you must have taken some investment to qualify. But I believe my friend received the credit, and I have no idea if his business has received any investments or participates in any accelerator program.
It is a program (several different programs, actually) that you apply for. This looks like a good place to start.[2]
If you don't think you qualify after reading that, might be worth talking to Arun Gupta anyway. He is the Principal Open Source Technologist. He was all over the Reddit thread answering questions about EKS today.[3] Super classy.
It looks like they offer $1000-$100000 depending on how fast you want to spend it and what stage you're at.[4] "AWS Activate - Portfolio Plus" is the program talked about, that offers $100k credit that expires in a year. If you don't qualify for that, you probably qualify for either the $1000 or the $15000.
You can set up the nginx controller with NodePorts to get rid of the load balancer if it's just a side-project cluster. (Otherwise load balancers will be the majority of your cost.)
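For anyone who wants to try that, here's a hedged sketch with the kubernetes Python client of a NodePort Service in front of an already-deployed nginx ingress controller; the namespace, selector labels, and node ports depend entirely on how you installed the controller, so treat them as placeholders:

```python
# Hedged sketch: expose an nginx ingress controller via NodePorts instead of a
# cloud load balancer. Namespace, selector, and ports are assumptions that
# depend on your particular controller install.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="ingress-nginx", namespace="ingress-nginx"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app.kubernetes.io/name": "ingress-nginx"},
        ports=[
            client.V1ServicePort(name="http", port=80, target_port="http", node_port=30080),
            client.V1ServicePort(name="https", port=443, target_port="https", node_port=30443),
        ],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="ingress-nginx", body=service)
# Traffic then reaches the cluster at <any-node-ip>:30080 / :30443.
```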
I look at this more as a product catering to existing AWS customers (and keeping them from moving to GCP) rather than trying to lure GCP customers over to AWS.
When you deploy Kubernetes "add-ons" like Helm's Tiller or https://github.com/jetstack/cert-manager/, the active containers of those get deployed to the management node, no?
Yeah, none of the rest of the big three are asking you to pay anything for those masters at this point, though. "They're your nodes, you paid for them" seems like a reasonable position to take here.
> "They're your nodes, you paid for them" seems like a reasonable position to take here.
If that were a reasonable position, you'd think AWS RDS and Google Cloud SQL would give you superuser access to your database instances to do things like installing Postgres extensions.
It seems a lot of people are happy to pay for instances they can't even SSH into. :/
I don't even want to SSH to instances that I control and have provisioned, why would I want to SSH into instances managed by AWS?
Unless the point of your server is to provide SSH (you are using it as a development box, maybe?), having to SSH means that you are lacking in the tooling department.
At work we are guilty of that. We are actively trying to improve on this.
In this case, it's not so much "SSH" as the ability to install files, as root, onto the server. For RDS/Cloud SQL, the inability to do that both restricts you from installing your own extensions; and restricts you from being able to use the Postgres COPY command with local/network-mounted (rather than network-streamed-to-STDIN) CSV files, majorly increasing the overhead of the operation and preventing parallelism.
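Right — on RDS/Cloud SQL you're stuck with the stream-over-the-connection form. For reference, a hedged sketch of what that looks like from Python with psycopg2 (table, columns, file name, and connection string are made up):

```python
# Hedged sketch: COPY ... FROM '/path/on/server' needs filesystem access you
# don't have on RDS/Cloud SQL, so the file is streamed over the connection
# instead. All names below are placeholders.
import psycopg2

conn = psycopg2.connect(
    "host=mydb.example.rds.amazonaws.com dbname=app user=app password=secret"
)
with conn, conn.cursor() as cur, open("events.csv") as f:
    cur.copy_expert("COPY events (id, ts, payload) FROM STDIN WITH (FORMAT csv)", f)
```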
AWS reacted to Google's (GCP's) success with Kubernetes by trying to build a competitor, leveraging its market leadership position. A strategy which was very unlikely to succeed from the get-go.
Unfortunately for AWS, current market domination doesn't help much in this case. It cannot be solved by yet another two-pizzas team.
I believe that AWS is trying to fight (or downplay) the scenario in which, in a few years, when a lot of containerized workloads will be in production, GCP will be a force to deal with.
That's it. Plain and simple. My 0.02.
(disclaimer: I worked at AWS from 2008 to 2014 as tech evangelist, and I spearheaded the VMware+GCP partnership in 2015-2016 when I was vCloud Air's CTO at VMware - opinions here are my own, and are not based on any confidential information).
(second disclaimer: if you think the first disclaimer is not necessary here, you probably haven't worked much in large corporations, or at least didn't experience or witness the same things that I did).
About your second disclaimer - are you worried someone would try to punish you for this comment, or are you trying to say that generally most commentators have undisclosed conflicts of interest?
(I work at Amazon but not AWS, opinions my own but geez I’m not gonna type that every time)
"You had an undisclosed agenda this whole time? How could you! I trusted you, anonymous stranger on the internet! This is an injustice that can never be forgiven!!!"
Get used to hearing this! Not the parent poster, but I've been on the receiving end of that.
This is pretty much why I sign almost all of my posts.
Disclosure: I work for Pivotal. My agenda is that I work there. I exchange my labour for financial consideration. I am partly motivated by that consideration. Pivotal. The company is called Pivotal.
I've been so excited to experiment with EKS ever since the announcement, but this offering looks very underwhelming.
With kops [0] I can spin up a production cluster on AWS quickly and have just as much functionality (if not more control) without paying Amazon ~$150/mo for the pleasure (per cluster!). It doesn't really seem to be "managed" either.
Maybe now's the time to really start to look at GCP/GKE. I've used them for some GitLab CI stuff in the past but never invested too much time into really seeing what the transition from AWS to GCP is like.
When setting up production clusters via kops on AWS I opt for c4.large instances for the masters, which with an HA quorum costs $250/month. If you use the default master instance type in kops, m3.medium, it's around $174/month. I fail to see the problem with AWS charging $150/month for a fully managed alternative.
For those trying to spin it up while the docs aren't available, I ran into some issues with the IAM role.
Basically, create a new role with a trust relationship to `eks.amazonaws.com`, with the AmazonEKSClusterPolicy and AmazonEKSServicePolicy attached to it, and you should be good.
Thank you AWS, for having consistent naming schemes.
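In case it saves someone a few minutes, here's roughly what that looks like with boto3; the role name is arbitrary, and the trust policy plus the two managed policies named above are the parts that matter:

```python
# Hedged sketch of the EKS service role described above: trust eks.amazonaws.com
# and attach the two AWS-managed EKS policies. The role name is arbitrary.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "eks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="eks-service-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

for policy in ("AmazonEKSClusterPolicy", "AmazonEKSServicePolicy"):
    iam.attach_role_policy(
        RoleName="eks-service-role",
        PolicyArn=f"arn:aws:iam::aws:policy/{policy}",
    )

print(role["Role"]["Arn"])  # pass this as roleArn when creating the cluster
```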
Not as generally available as I thought, and from the looks of it, it feels just as "hacky" as the preview with respect to the user experience. For some reason, I was expecting more from them.
Any frontend eng from EKS: you should look at the Chrome console.
"""
Warning: It looks like you're using a minified copy of the development build of React. When deploying React apps to production, make sure to use the production build which skips development warnings and is faster. See https://fb.me/react-minification for more details.
"""
ELI5: do you want to manage EC2 nodes and run containers on them? Use ECS (until you outgrow it. Why not... it is the cheapest of the three.)
Do you want to run Kubernetes in production, but afraid to do it yourself? (You probably know already who you are...) Container clusters composed of EC2 nodes, but joining the rest of the civilized world whose dev team thinks in the abstractions of K8S? Use EKS, today's announcement is for you.
Do you absolutely not want to manage EC2 nodes, but want to run containers? Use Fargate. Coming soon, Fargate for EKS will reunify the two threads.
No, ECS is definitely not going away. Having multiple ways of doing something is not strange for Amazon; in fact it's what you should expect from Amazon. Just look at how many different database options there are!
ECS will remain as the free option that is great for small workloads, and companies that are going all in on AWS. It has the tightest integrations with other AWS services.
EKS is a good option for companies that don't mind spending a little more to run the open source version so that they have portability or a consistent environment between their on-premise and the cloud. While EKS does integrate with other AWS services it has to do so through plugins and controllers that aren't quite as tightly coupled, since EKS is just the same open source Kubernetes that you can run anywhere.
The use and functionality you get from ECS and EKS can be quite different. ECS is tightly AWS-integrated, EKS is designed to be cloud-independent. You can pick between ECS and EKS based on your needs and use Fargate as the interface to either.
It probably won't go away. They didn't eliminate CloudSearch when they started offering managed Elasticsearch, or SQS when they started offering managed ActiveMQ. However, AWS will likely put more resources into EKS than ECS.
Because the open source Kubernetes components have thousands of developers contributing to the code, as well as a whole ecosystem of components built against Kubernetes' API.