DO's Kubernetes release is an example of why I am a big fan. As a sole developer, I can't afford high technical debt, but DO packages tech in a way I can manage. I hope they keep it up, and I wish other services (here's looking at you, AWS) would package their offerings as well.
I am a mechanical engineer who dabbles in web development from time to time. I am forever indebted to DigitalOcean for creating a super easy platform for someone starting out with no clue about VPSes. I knew how to operate a Linux machine but had not the slightest idea how to host a website myself until I came across DigitalOcean and their LAMP/LEMP tutorials.
Once I was comfortable with DigitalOcean, I tried launching a VPS on AWS and, holy crap, it was insanely complicated. Within 10 minutes of creating an AWS account, I was out. I understand there is nothing wrong with AWS; it is just not for me. DigitalOcean has fulfilled my needs perfectly, with a huge knowledge base and detailed tutorials.
Hi! I'm a member of the Community team at DigitalOcean. I wanted to thank you for your kind words about our tutorials. This kind of feedback means a lot to us. We're glad we could help you get your website set up.
Thanks for the tutorials and for keeping them up to date. I’m probably not their target audience for the most part, but when I need to do something in an unfamiliar stack, [stack name] + digitalocean is usually my first search. I wish you had slightly more professionally oriented products (think AWS/GCP) and no ‘max 10 servers’ kind of rules, so I could use it.
You can contact their Support to get that increased. Just guessing at the reason, but if there was no limit, what happens if someone hacks your account and spins up a 100,000 node cryptocurrency mining farm?
The same thing applies to AWS, and AWS doesn't have a '10 servers maximum' limit.
Beyond anything, it tells people about their target audience, which is indie development. That's fine, and it's a great market to be in. But in the case I have to spin up 17 servers in 24 hours in three continents, I can't really afford to deal with DigitalOcean's support under that kind of stress. This doesn't happen often, but when it happens, it absolutely breaks you.
AWS most definitely has service limits that apply to all products, including EC2, for exactly this reason (and to curb other abuse). In fact, the AWS limits are even more convoluted and can hit at random if not tracked. More details here: https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_ru...
Yeah, as I was building out some apps over the past year it was a game of ‘which account limit will I hit next’. Most of them require a support ticket to be raised, and justification.
Hey Rolleiflex - Thanks so much for being a DigitalOcean customer! We would be happy to increase your Droplet limit if you get in touch. Just visit the support link from your Cloud control panel to make the request or drop me a line directly (first name @).
For what it's worth, my account has a limit of 25 and I've never requested an increase. So I guess after some period of use and payment they trust you and increase your limit automatically?
I've been a DO customer for 5 years but I'm not sure when my droplet limit was increased.
FWIW, I find AWS limits confusing and seemingly random. Also, the fact that you can't limit total spending is _very_ unfriendly to (at least indie, as you point out) developers. I have no experience with DO though, maybe that will change with this offering.
Have you actually dealt with their support though? Your example of going from (seemingly) zero servers to 17 across 3 continents in 24 hours (indicating unforeseen absolutely incredible traction and growth) seems significantly less likely than getting a response from their support team increasing usage limits within the same timeframe.
As a Linux sysadmin, I love coming across a tutorial from DigitalOcean when I'm searching for a how-to, because I can always be sure it will be up to date, well written, and complete. A big thank you to you and the rest of the team!
AWS is much more structured to treat infrastructure like cattle rather than pets. DO blends the line a bit, but in spinning up a ‘droplet’ you’re leaning more on the ‘pet’ type of thinking. I’m super bullish on DO - they have the API to act like AWS (to a degree) but the UI to support small/individual teams.
Agreed. DO's knowledge base changed the game for me. I went from never getting anything to work out when trying to build projects to having a plethora of tutorials on a wide range of topics that seem to always work out. DO's server service is awesome as well, but man I'm really really thankful for all their tutorials!
Hey there! I'm from DigitalOcean and wanted to thank you for sharing your experience around using our platform and community. I loved reading your comment and shared it with our team, who in turn would like to show you some love. We'd love to hear from you at sammy[at]digitalocean.com!
I am also a fan and a customer, but did you notice how many times their tutorials appear in search results? Their SEO game is strong! That's how they got me.
AWS used to be easy, but over the last decade it's become a specialization. Every time I wander back to it, there's another layer of complexity in the way of doing something simple.
I agree. It seems like a deliberate strategy. Amazon is trying to create a breed of highly paid AWS experts who will be keenly interested in promoting AWS, because their valued knowledge is provider-specific and not transferable. Similar to how Microsoft created all those MCPs who tried to push Microsoft tech everywhere regardless of how well it fit the task.
I’m a partner in a company in Puerto Rico. Last year immediately after Hurricane Maria hit Puerto Rico, I wrote all of our off-island service providers asking for any help they could provide. DO was one of the most generous responses. They donated 3 months of services based on our average billing.
We greatly appreciated it, and the nice note they sent me showed they’re not only great at providing a good product, but they’re eminently human as well.
I believe it’s small things like this that separate the underdogs from the giants. The giants are fine if you are a huge corporation like Apple; for the little guys, DO is amazing.
Yep, you can find cheaper VPSes out there, but for reliability and ease of use DO is hard to beat. As for AWS, it's hard to beat for the scope of services it offers, but at the scale of a handful of DO droplets, it's just not worth the effort AWS takes.
I ran into some limitations with Terraform and at the time it didn't support Vultr, so I ended up writing my own provisioner. It goes a bit further with setting up DNS records as well and I rolled in some of my own Docker deploy stuff into it; although in retrospect I should have made that a different project.
Pointing this out... I've gotten a bit of basics working with vultr + terraform but it's not the most straightforward. I'm not the author, but an interested observer.
In my searches for vultr + terraform, your project never came up, but there seems to be some overlap or room for collaboration.
I don't see why. I switched from DO to Vultr and then to Hetzner because DO's price was 4-5 times higher. Maybe it is a bit more reliable, but when I can get five servers for the price of one, I can add a lot of redundancy...
This I don't understand; there are a million cheaper VPS providers. I can understand choosing DO because it's hip and they have great tutorials, but price? Nope.
That and as a user of 12 different LowEndBox-style providers, DigitalOcean is only marginally more expensive, with way less downtime, faster support response, an easy to use API, integrations with everything, and even planned maintenance windows which are clearly communicated.
Most of the lower priced options are run on a more shoestring budget and get uptime measured in the 98s. Also, I find a lot more noisy neighbors with many LEB-style providers.
Huh, I haven't had this issue on any of their brands. They tend to catch and fix most issues before I become aware on the dedicated machines I have with them.
FYI OVH is a key contributor to OpenStack, they've made quite a few contributions to Ceph, sponsored LetsEncrypt, and host most game servers (and even Wikileaks!).
Adding to this, I once bought their hosted container service, and when I contacted them they told me they no longer supported this product. How?! Why was I able to order it in the first place?!
I am a long-time user of OVH. I don't care about fancy UI; I have hardly used it for anything except paying bills. I just wish they'd open a data center in India. That's the reason I was looking at DO; they have a DC in Bangalore.
Bandwidth and power are too expensive in most of Asia, OVH's business model is founded on building DCs next to cheap, plentiful power, then peering at the surrounding IXes for bandwidth. Most of the local IXes in Asia do not have local incumbent carriers participating, and those incumbents want hellish rates to peer.
Routing in Asia will remain crappy due to these peering problems, and volume wholesalers will be few and far between so long as power is spendy.
CaaS-es (i.e. "things that present themselves as a Docker daemon or something like it") don't allow you to provision IaaS-level resources like VMs or disks, merely connect your containers to existing resources. When CaaS-es do allow you to provision stuff (like e.g. Hyper.sh does), they do it through a direct IaaS-level API that is separate from the functioning of the CaaS itself.
The major cloud providers' deployments of Kubernetes (and other server-side persistent-orchestrator systems, like the venerable CloudFormation) are deeply integrated into the cloud platform they're running on, such that the orchestrator itself can provision resources for a container to run on as part of deploying the container. This becomes important when elastically auto-scaling a container, because each container might need e.g. its own disk, and you can't create them ahead of time if you don't know how many you'll need.
This also means that, unlike a CaaS, k8s et al can manage the very cluster that k8s is running on, scaling it out to suit the size of the current/estimated workload.
Theoretically, you can bootstrap k8s on top of a vanilla CaaS—this is how minikube installs "using" your local Docker install, and this is how deployable PaaSes like Flynn and Deis work. But this approach doesn't supply k8s with the cloud-specific integration it needs in order to provision stuff. It might work if you're deploying against something with a standardized API like OpenStack; but none of the major cloud providers are compatible with such APIs, and so they need to build their own k8s plugins that call their IaaS-level APIs, to make k8s work on their clouds.
Or, to put all that another way: if there were standard IaaS-level APIs for k8s to hook into, Docker (and the CaaSes that either use or emulate it) would just hook into those APIs itself, and there would be no need for a higher orchestration layer.
tl;dr CaaS doesn't orchestrate the underlying infrastructure, whereas k8s's primary purpose is to provide a cloud-agnostic way to orchestrate containers and the infrastructure they run on.
Kubernetes is designed so that stateless pods (collections of containers sharing the same IP and identity) are decoupled from stateful ones. There are concepts designed into K8S that allow components to attempt to self-heal.
For example, a pod that requires a PostgreSQL pod to connect to will fail and crash. The scheduler will start a new one. If the PostgreSQL pod is up by that point, the rescheduled pod will no longer crash.
As far as network paths go, one of the really cool things about pods running inside a k8s cluster is that they can access any of the other pods, even ones on a different node. However, pods typically reference services (such as postgresql) by DNS name. You specify the set of pods that belong to the service with a label selector. This allows pods to come up, tear down, crash, or move to another node, while the service maintains a stable point of contact. It is quite brilliant, and other orchestrators quickly tried to copy it.
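A minimal sketch of that label-selector pattern, assuming a hypothetical `app=postgresql` label (all names here are illustrative, not from any particular setup):

```shell
# Create a Service that routes to whichever pods currently carry the
# app=postgresql label; clients just resolve the DNS name "postgresql".
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  selector:
    app: postgresql
  ports:
  - port: 5432
    targetPort: 5432
EOF
```

Clients then connect to `postgresql:5432` and never care which node, or which replacement pod, is actually behind it.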
Stateful workloads are still difficult. Each distributed stateful system has its own way of setup and teardown. What we will probably see are custom Operators designed for each distributed stateful system, coming out over the years.
Sorry, should've been more specific - looking for the same sort of 'provide a container and we take care of everything else' experience, but ideally for a cheaper price. I know I could get a micro instance and set up ECS on it, but it just seems like such a royal PITA...
Also like how my deploy script is basically 'build image; push image; aws ecs update-service'.
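Spelled out, that kind of deploy script really is just a few lines. This is a sketch; the registry, cluster, and service names are made up:

```shell
#!/bin/sh
# Hypothetical names -- substitute your own registry, cluster, and service.
docker build -t 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest .
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
# Force ECS to pull the new image and roll the tasks.
aws ecs update-service --cluster my-cluster --service myapp --force-new-deployment
```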
With my side projects that are running on a droplet, it feels like there's an incredible amount of additional setup that I need to do every time I add an additional project. Add the new site to the reverse proxy, setup a git server I can push to, set up post-receive hooks for the server, etc., etc.
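For anyone curious, the post-receive hook piece of that setup can be as small as this sketch (paths, branch, and service name are placeholders):

```shell
#!/bin/sh
# .git/hooks/post-receive on the server's bare repo:
# check the pushed code out into the web root, then restart the app.
GIT_WORK_TREE=/var/www/myproject git checkout -f master
systemctl restart myproject
```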
How do you do that? I only know the traditional "install OS and handle devops yourself" model.
Having said that, I got a cheap and rather beefy server on Hetzner and installed Dokku on it, and I couldn't be more satisfied. It's like having my own Heroku for my low-traffic side-projects, almost for free.
EKS is no more a competitor to K8S than DOK8s is a competitor to K8s. The CNCF Conformance page[1] shows a link to a spreadsheet[2] which indicates there are currently 96 products by 82 different vendors, including 34 hosted platforms like EKS, which are all Kubernetes.
ECS on the other hand, is a so-called "vanilla" container service which provides its own abstractions and offers no suite for conformance, or compatibility with other vendors' offerings. I have not heard lots of people say great things about ECS. If I could say one nice thing, it's that there is probably less to learn about ECS than about Kubernetes.
For me, ECS having less to learn was its main appeal. You get integration with AWS load balancers, giving zero-downtime deployments, and its API is very straightforward to automate. I set it up 2 years ago and have barely touched it. I evaluated K8s at the same time and, after a full day, was left completely confused about how to do the same thing; I suspect I would have been even if I'd spent a full week.
ECS definitely has some oddities, mainly in the task definition spec, which is mostly 1:1 with Docker commands but with AWS's own stuff mixed in. Apart from that, its simplicity versus K8s was its biggest drawcard. A lot has happened with K8s in two years, I'd imagine, so the same choice today might be a different story.
Off topic in terms of the article itself, but I just wanted to give some love to DO. We are a very data- and processing-heavy startup and have been using DO for more than 2 years. We have not experienced any issues: it's super easy to manage, the performance is great, and, super important for us, the cost is predictable.
I'm happy to see DO get some attention. I hope this means that really good hosting services like DO can still thrive in the age of AWS. DO seems to be doing well.
I agree. I like that Digital Ocean takes their time to get a new product offering right. It shows. Especially when you compare it to AWS, which we're in the process of moving away from.
Since I already had experience with Linux, the first time I used Digital Ocean it worked intuitively, the way I thought it should. And I think DO's documentation is some of the best. They seem to take documentation very seriously, which is important when it's late at night and you're trying to figure out how to do something.
Digital Ocean. We compared AWS, Google Cloud Platform, and Digital Ocean. While the latter isn't an apples-to-apples comparison with the other two, we found that the price, ease of use, and reliability made it the best choice.
I'm not yet sure about support since they don't offer any phone support. But it can't be worse than Amazon's, where I once literally had to yell at the support rep to stop talking because he just kept repeating himself, over and over, and wouldn't let me move on.
I agree. I've been using them for years now and their uptime and product usability has been awesome and significantly better than other hosts I've used in the past.
I've been managing multiple docker apps (using docker-compose) on DO for years. Is there a guide I can use to transition my apps from docker-compose to k8s? I've dabbled in k8s, but am not an expert at all.
Kompose has been around for a little while, which essentially "compiles" your docker-compose.yml into K8S config files. However, Docker recently announced ability to deploy directly from docker-compose: https://blog.docker.com/2018/12/simplifying-kubernetes-with-...
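In its simplest form, assuming kompose is installed and your images are already on a registry, the conversion is two commands:

```shell
# Translate docker-compose.yml into Kubernetes manifests, then apply them.
kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/
```

In practice the generated manifests usually need hand-tuning, as the comments below note.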
My understanding is that you'll still need to do some work, especially if you are building via compose instead of pointing to an image on a registry.
Yes, there is still a lot of work that requires a lot of knowledge about k8s. Kompose up does not just work, but it seems like DO could make that a simple set of commands with better documentation.
I've found that kompose does not give me the "try before you buy" experience I was hoping for.
For example, another major cloud provider lets me push up and deploy my container (stored in their registry) without even having Docker installed on the client machine.
Of course, I would want to test and play with k8s before I used it in production. But with kompose I still feel like I need to understand a lot about k8s. With Docker Compose, I have almost forgotten everything but docker-compose build && docker-compose up.
I think this is why Heroku was so popular. Just change git push to heroku push and try it out.
Since at least a few people align with my comment by upvoting it, it seems enough people would love an easy transition path from Docker Compose to k8s. This would be a killer feature for Digital Ocean: create a new cluster, run a few well-documented commands, and your application is running inside DO on k8s. A guy can dream, right?
DO has always had the best documentation on just about everything technical; maybe this is just an opportunity to write up the steps? Those two documents so far, even the one from docker, still require a lot of extra reading for someone who has used docker compose for a long time.
Serious question:
Is there an emerging cross platform workflow language to just write stuff to run on any cloud/container hosting setup?
The idea would be to be portable, avoid vendor lock-in and take advantage price differences or quickly route around a system failure in one of the providers.
Each machine that's spun up is built from scratch via one command-line call. The first half of the process interacts with each hosting API (we rely on DigitalOcean, Linode, and Vultr primarily), to build a clean slate machine with all of the packages and libraries that we expect.
The second half of the process runs the actual build process, building the instance step-by-step on top of the clean slate, blissfully unaware of which hosting provider it lives on.
This model allows us to be portable and avoid vendor-lock in, and a cross-provider infrastructure lets us gracefully handle system failures while keeping costs down.
I made something similar and turned it into a service [1] focused on WordPress, but unfortunately there hasn't been as much interest from people as I thought there would be, though that could be due to my lack of marketing.
My goal was the same: to make hosting more portable, with features like snapshotting and restoring WP sites across servers, and eventually to expand beyond servers, bringing in domain registrars and cloud storage so things can be moved around more easily. For example: you have a site hosted on AWS EC2, with DNS at Namecheap and nightly backups in Dropbox, and the AWS Virginia region goes down. You create a new server on Digital Ocean, restore the snapshot from Dropbox, and the linked DNS at Namecheap is auto-updated.
The more I thought about this, though, I began to realize that maybe these features wouldn't be useful to the audience I wanted to target: people who want to grow beyond shared hosting and have something reliable with fewer noisy neighbors, still more affordable than managed WP hosts, and with more control (bring your own cloud/server provider).
Some unsolicited feedback: your name is terrible (unrelated to your product in any way) and your website doesn't communicate the problem you say you're solving.
All I get from your site's landing page is "WordPress hosting", which is not exactly uncommon.
Scrolling to the bottom shows me some cloud providers, which makes me think you just help people host WordPress in the cloud.
No, unfortunately, open-sourcing it has been on my to-do list for an embarrassingly long time.
But, building one is easier than it sounds! Think of the problem in two parts. First, find a distro used by multiple providers (we're on CentOS) and craft one script that uses each provider's API to spin up a clean machine.
Once you have that done, it's a matter of understanding your own build process, writing a script that you'll pass into each instance on creation that will fetch your source control, install libraries, and put all the pieces together.
Lots of if/else, lots of curl, lots of yum, lots of jq, but all of it is really straightforward.
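A rough sketch of the first, provider-facing half of such a script. The DigitalOcean endpoint and fields match their public v2 API as I understand it, but the token, names, and error handling are placeholders:

```shell
#!/bin/sh
# Spin up a clean-slate machine on the chosen provider, then hand off to the
# provider-agnostic build script. Only the DigitalOcean branch is sketched.
case "$PROVIDER" in
  digitalocean)
    INSTANCE_ID=$(curl -s -X POST "https://api.digitalocean.com/v2/droplets" \
      -H "Authorization: Bearer $DO_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"name":"build-1","region":"nyc3","size":"s-1vcpu-1gb","image":"centos-7-x64"}' \
      | jq -r '.droplet.id')
    ;;
  # linode|vultr) ... same idea against each provider's own API ... ;;
esac
echo "created instance $INSTANCE_ID"
```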
Also if your providers + OS support cloud-init, then you can express a fleet of instances which run this sort of script at boot time in something like Terraform pretty easily. Switching clouds becomes "uh... what does <provider> call their <size> instance again?"
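With cloud-init, the boot-time half can simply be handed over as user-data. A sketch, where the repo URL and script names are made up:

```shell
#!/bin/bash
# cloud-init user-data: runs once on first boot, on any provider that supports it.
yum -y install git jq
git clone https://example.com/our/app.git /opt/app
/opt/app/build.sh   # the provider-agnostic second half of the build process
```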
Alternatively, pre-baking cloud images that have already run such a script and are ready to boot becomes pretty easy with a tool like Packer.
Though, as the underlying OS changes, you'll need something to validate your scripts' functionality against, and a tool that's a little more declarative might make them less fragile to those changes.
Certainly understandable -- I'd prefer to keep things simple and just have some kind of validation in place rather than rely on an abstraction if I can get away with it.
Having maintained various automation over the course of the past decade and a half, I can say things do change around. Over the course of only a few years though, obviously you can stick to some LTS release of whatever you're using and be pretty confident that e.g. "some-package" does not get renamed to "some-package-version" or split into "some-core" and "some-utils", or have a package get upgraded to a version with some less-than-backward-compatible configuration options, etc.
There's nothing special about our stack. We have four different instance types (static, api, db, proxy), and we rely on a lot of the usual suspects: Apache, Tomcat, MySQL, Varnish, and HAProxy.
Is there a good way to sandbox terraform configurations? I'm not directly involved (just hear the screaming) but everything I'm hearing is that making modifications is a test of willpower.
For us it's been about as transparent as a brick wall and I'm not clear if that's down to our bureaucracy or built into the design. Both are anathema to the goal of making complex deployments straightforward and self-describing (you can't manage something this complicated unless big parts of it are as obvious as can be).
The recommended way, at least for AWS, is to have multiple accounts. One for production, and then however many more for test and development. Separate accounts let you run TF changes and know you will not impact production.
TF can be tricky to grok at first especially if you don't have everything in TF. But, I couldn't imagine managing more than a server or 2 without it or something similar at this point. Once you get into VPCs, IAMs, etc..., some type of tool is really required.
I'm also a little confused about your transparency comment. IME, tf is very clear what it is going to do in a plan. The current state files are also just json, and easy to read/search if you're not sure about something.
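Concretely, the review loop that makes TF transparent looks like this (the profile name is illustrative):

```shell
# Point at the non-production account, then review before applying.
export AWS_PROFILE=staging
terraform plan -out=tf.plan   # prints every resource it would add/change/destroy
terraform apply tf.plan       # applies exactly the reviewed plan, nothing else
```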
Declarative syntax is notoriously hard to debug, especially for newbies.
As a general rule, if you're giving someone a tool that uses declarative syntax, you also need to provide them a private (not shared) sandbox in which to test out theories, try new things, and reproduce errors seen in production.
Since we don't have that, TF is pretty much the worst solution for our problems. Kube or even Docker Swarm would serve us much better.
Vultr allows floating IPs for IPv4 and IPv6, but Digital Ocean only has floating IPv4. Vultr will start a machine with a floating v4, but you have to add a floating v6 address (giving you two v6 addresses). Digital Ocean does the same thing with v4 (giving you two v4 addresses). They both have different network adapter names, so you've got to configure those per provider as well.
Terraform and my own thing help in easing the transition if you ever need to move, but modifications will still have to be made.
Yes and no. For example, "type: LoadBalancer" works fine on almost every cloud, but various annotations need to be added for SSL termination on an AWS ALB, for example. The annotations don't collide, though, so you can have a load balancer with both AWS and Google Cloud annotations, and it will work fine on either cloud.
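For example, a Service can carry a provider-specific annotation that other clouds simply ignore. A sketch; the certificate ARN is a placeholder:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Only consulted when running on AWS; other clouds ignore it.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/placeholder"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8443
EOF
```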
Volume classes are probably the best example of being cloud-specific, but this problem is solved by having a different volume class for each cloud provider, named the same, such that the deployment can always grab a disk regardless of which cloud it's living in.
They are. Kubernetes has abstractions at exactly the right layers (e.g. Service to create a load balancer) so that you can exchange configs between cloud providers.
There can of course be some difference in the capabilities that each cloud provider supports (e.g. not all load balancer implementations may support UDP) but the abstraction is definitely there.
I thought load balancers popped out the end of Services and it was plugins that handled the specific cloud environment? I'd say that still constitutes cross platform.
We have a big roadmap for 2019. Queues are interesting and so are functions in general. Nothing to share today but those are items we are assessing for future roadmaps :)
You could use Hephy Workflow (you might have heard of Deis Workflow?) on DOK8S to do that.
This is an open-source fork of Deis Workflow.
We're hoping to add support for Digital Ocean's Kubernetes and ancillary services like storage in the next release. Preliminary testing indicates that it is very much possible. The only dependencies of a prod Workflow deployment are any compliant Kubernetes, load balancers, and S3-API-compatible storage, all of which are available on DO now.
The latest Hephy release does not yet support general S3-api compatibility with arbitrary S3 providers like DO, but DigitalOcean is our first target platform for expanding the supported offerings. The pull requests are open right now.
(Currently supported platforms include GCP/GKE, Amazon, and Azure AKS.)
I use DO for my personal website and pet projects and it works well.
However, I am curious if any medium to big-sized tech companies are using DO in production. As far as I know, everyone is using AWS, GCP or Azure. What's DO's target audience?
I am kind of confused... On the one hand, most people here seem to be fans of DO's services and praise their simplicity; on the other hand, I look at their page and wonder what they are offering...
The names of their services seem to be as confusing as the AWS names. Yes, overall their portfolio is closer to actual use-cases (as in 'I want to have a blog' -> they have an offer for that), but I am still wondering what a droplet is (it looks somewhat similar to a Virtual Private Server).
When Hetzner released their cloud service earlier this year, I tried it, loved it, and still do. Sure, they don't offer the same products (e.g. no S3/Spaces), but at least they use established technical terms instead of made-up marketing names you have to learn anew for every cloud host you want to try.
You are probably correct. Their product range is quite limited and probably qualifies as IaaS. But on the other hand, everything fits very nicely together (e.g. adding a backup plan for your servers is just a matter of a few clicks and if you don't like clicking through a Web interface: Their API is quite reasonable and easy to use too).
I hear great things about DO and I really want to try it out but DO doesn't accept payment from our country. The same $5 droplet costs $25 here. I really hope you guys expand to the developing countries.
I've been a huge fan of digital ocean ever since I started renting a 5 dollar vps several years ago.
Their UX is consistently easy to navigate, has great documentation, and looks great as well.
I may not be in the category of users that requires or needs many of the features they've released, but I'm consistently impressed by how easy it is for me as a non devops engineer to grok exactly what each new feature they release is.
This looks super neat, I don't have any need for kubernetes as a small time vps consumer, but always happy to see them move forward in this manner.
Usually, I'm never satisfied with products/services and always wonder how they managed to screw up. To counter this tendency, I created a list of things that just work and that I have nothing to complain about. DO is on this short list.
As someone who got the k8s invite and has been experimenting with it on DO, I just want to say that I like it, and this was the main reason I decided to stay instead of leaving for GCP.
Man, judging from your other comments, you must really have a bone to pick with DigitalOcean.
For one, AWS wasn't the first cloud provider to offer managed Kubernetes. Two, pretty much every major cloud provider offers some sort of k8s product. Three, EVERY cloud provider is trying to play catch-up to AWS; that's not specific to DigitalOcean.
"Linode fails to offer basic cloud products that every other cloud provider has." FTFY
Yes, this is the pricing model for everyone except EKS/AWS, as I understand it. Master nodes are bundled with whatever you spend on your worker nodes.
Google has gone so far with GKE as to offer HA masters distributed across availability zones at no extra cost. (On the day that Amazon announced EKS general availability, if I remember correctly, which is priced at $250/mo base cost, before you even get around to spend anything on worker nodes.)
Just want to call out that our worker node pricing is the same as our Droplets (servers). There is no price markup on using our managed service. In fact it's cheaper than deploying it yourself on DO because you don't have to pay for the master node.
I've been using Kops with Digital Ocean for some time on-and-off, comparing it to the new managed offering which I've been using in limited release, and it works great (either way).
The main disadvantage of Kops (besides that it's alpha only, and not managed) is that I pay for all of the nodes I use, masters included. It should be clear that managed k8s offers a direct cost savings pretty much everywhere it's offered.
(It would be clear, if AWS was not currently leading the broader market and offering EKS with a price model basically contradicting every other vendor's.)
This is a huge number compared to the competition, but also a rounding error when it comes to the monthly infrastructure spend of Amazon's target market here.
I mean, to be fair, that's a really reasonable price for a HA cluster. If you ignore the pricing models of literally all of the competitors' offerings.
You can have a Kubernetes cluster for about $15/mo for your personal project on GKE, if you can cope with several f1-micro or a single g1-small instance hosting your workloads. That's the cost of the nodes, and that's the all-in price. Prices scale up linearly for greater capacity, just add more nodes. (Then of course I guess networking, traffic, and additional storage can also add to the costs...)
If you are comfortable with Kubernetes, you should not be priced out of the market, even for hobbyist projects; the ecosystem is too valuable. I keep saying that Amazon really does not want their customers to use Kubernetes, and it shows in their market offerings. Only Amazon charges this premium for managed clusters, and they don't even seem to recommend using it in the keynote talks I've heard mentioning EKS. "Unless you know you need Kubernetes" is a great way to stop the discussion about adopting new tech.
If you are not already comfortable with Kubernetes, then that unfamiliarity, not pricing, is the primary obstacle to your using K8S. The cluster pricing issue is a problem only for people who are hyper-focused on Amazon.
If you want to do this on the super cheap, you can also use your worker nodes as load balancers via Ingress, without provisioning any Load Balancers. (FWIW, DigitalOcean charges for load balancers too, and you can avoid spending on them in the same way. I think theirs are cheaper, though...)
The thing to look up is nginx-ingress settings for DaemonSet and HostNetwork mode. The settings to use might be slightly different on GKE. I can give you the one-liner I use to make it work on DO/Kops, here:
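Something along these lines, using the stable/nginx-ingress Helm chart — a hypothetical reconstruction, since the original one-liner wasn't preserved, and the exact flag names may vary by chart version:

```shell
# Run nginx-ingress on every node (DaemonSet), bound directly to host
# ports 80/443 (hostNetwork), with a NodePort service instead of a
# cloud-provisioned Load Balancer.
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.kind=DaemonSet \
  --set controller.hostNetwork=true \
  --set controller.service.type=NodePort
```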
That last setting about NodePort may be extraneous, I think you can skip it... actually, now that I come to think of it, I think that is the part that prevents the ingress from provisioning a Load Balancer in front of itself.
Note, of course, that there is a reason why it is the default, and why you may be inclined to purchase a load balancer: doing it this way is fairly likely to turn out not only less reliable, but also super inconvenient in a lot of ways. Not "nasal demons" inconvenient, but...
Can you give some insight on whether the master nodes are tiered and, if so, how? My DO master node doesn't respond to commands as quickly as GKE's, but I don't know whether that depends on the tier I chose for the nodes in the node pool.
I’m just sitting down to do a new startup, but I’ve been out of the devops game for a few years. I feel behind.
What tools/platforms/hosts should I use?
The system will be your standard API+Database+Event bus+workers. I’m a fan of digital ocean and I’ve never bothered to learn AWS (besides S3). I’m very familiar with docker compose, but I’ve never gone deeper than that.
This is a first year startup, we aren’t cost constrained but we are extremely time sensitive.
Should I use Kubernetes? Or is there something easier that will serve us better in the first year?
I think it kind of depends on what you're familiar with. If you're a rails dev and you can build something crazy fast with rails vs anything else I'd just do that. If you're used to working in event-driven microservices environments, then do that. I'm working in an environment using node microservices, mysql, rabbitmq, with k8s and it works really well. I wouldn't say we're _faster_ because of k8s, but k8s really helps us move quickly once we get a service deployed to the cluster.
I'm also working on a start up, and chose to start with heroku and a PHP monolith (with a handful of microservices to do some of the heavy lifting) because those are the things that allow me to move fast. If we ever make some money and the product does find market fit, we'd probably move to something like k8s, but it definitely isn't a part of the early stages for us. YMMV /shrug
Hopefully they offer easy upgrades and high availability. I always loved the simplicity of their services.
Would also love a way to deploy 1 app to multiple locations with ease
Sorry if off-topic: How does DO compare to Linode? I have lots of experience with Linode but since I hear good things about DO I would love to try it out.
It is good and all that they provide more services, but why can't they provide the bread and butter of IaaS: virtual networking (aka a VPC), the ability to set up a virtual router and other nodes inside a private network? We are currently a DO customer and have had to hack around this limitation for quite a while now; it is the main reason we want to switch away.
DO's "private" networking was not even truly private previously, as it was shared among its customers. Only recently did the "private" network get separated from the rest. Even so, the new "private" network does not allow for something like installing a custom DHCP server and configuring a custom subnet for the nodes inside. One of the most common use cases is to route outbound traffic from all of the nodes inside a private network through a public gateway, and DO's current configuration does not allow that.
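The usual hack is to push that routing down onto every node by hand, pointing the default route at a NAT droplet over the private interface. A rough sketch — interface names and addresses here are assumptions, not DO specifics:

```shell
# On the gateway droplet: forward and masquerade traffic that arrives
# on the private interface, sending it out the public one (eth0).
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# On each internal node: replace the default route so outbound traffic
# goes via the gateway's private IP on the private interface (eth1).
ip route replace default via 10.132.0.2 dev eth1
```

A real VPC with a configurable router would make all of this a single routing-table entry managed by the provider.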
I used to view Digital Ocean as kind of a play toy, good for experimenting and not much else, but these days they're a key player for sure and they've been a super reliable VPS host. Can't wait to try out some container stuff.
DO is great; I like it very much for personal projects. But it's worth giving Google Cloud a try, as it reminds me that there is a cutting-edge cloud service out there, just in case I need it someday.
Hmm, now all it needs is a comprehensive tutorial for somebody who has ignored the whole container fuss so far (happy with Ansible). How do you get from 0 to 100 with Kubernetes?
It's a hard nut to crack. What I've done myself is to jump into any book published about Kubernetes and to take some online training through a couple of different MOOCs. This may be a good starting point:
I've seen this list before and it is super comprehensive. Thanks for linking it; I need more like this for my "extreme breadth of choices" slide, when I present to my coworkers who are not using k8s yet, to emphasize how many choices there actually are.
Why not try a cluster with a smaller scaling group? You can create a cluster with only one node in it, but what is it that you are trying to run on top of your Kubernetes? In my experience with growing clusters, you probably want to scale up the size of each individual node before you scale up the number of nodes in your cluster. (You might even find that you really need only one big node, say for your databases, and want to build a heterogeneous cluster with an autoscaling group of little nodes plus that one big node. That's a possibility with node pools on DO K8s.)
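That heterogeneous shape can be sketched with `doctl` — the flag syntax here is from memory and the pool names/sizes are made up, so treat it as a hypothetical sketch rather than a copy-paste recipe:

```shell
# One big node for the database, plus a pool of small general-purpose
# nodes that can grow independently later.
doctl kubernetes cluster create my-cluster \
  --region nyc1 \
  --node-pool "name=big;size=s-4vcpu-8gb;count=1" \
  --node-pool "name=small;size=s-1vcpu-2gb;count=3"
```

You'd then use node labels or taints to pin the database workload to the big pool.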
An ideal cluster size for me is probably 5 nodes with ~8-16GB RAM each. You could make it still worthwhile to do the cluster thing with probably only 2 nodes at ~1-2GB each, but that'd be pushing it.
I am practiced at making clusters cheap; I actually once published an article on the Deis blog about how to deploy the Deis v1 PaaS in a highly available fashion for as cheap as possible.
Many of the lessons from nearly a year of research that I did on the topic prior to publishing still apply to modern Kubernetes clusters; but many of them don't, and still others are out the window completely on these managed environments, where it now seems possible to get pretty much the same kind of "High Availability" I was aiming for, but much cheaper and with better guarantees.
For instance, since you are not running etcd yourself (it runs under the hood, on the management plane), there is no longer a specific rule that says you must have a minimum of 3, or preferably 5, nodes to keep a stable cluster. That was the basics of learning to wield CoreOS and Fleet 101!
Consensus is handled on the masters, and that consensus is subject to split-brain problems, so this knowledge is still important; you just don't need to apply it yourself. In many basic clusters on managed systems like GKE and DO K8s, this knowledge is practically a relic! Two nodes may be enough to ensure that one is there to pick up the slack when the other has a fault, exactly how you'd imagine it should work without a Computer Science degree. And with two nodes, since you'll probably never see a fault like that, and the whole environment is self-healing, even if one happens on your watch you might never have to know about it.
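For intuition, the quorum arithmetic behind the 3-or-5-node rule is simple — a small illustrative sketch, not tied to any particular etcd version:

```python
# etcd (Raft) can only commit when a majority of members agree,
# which is why odd-sized clusters are recommended.

def quorum(n: int) -> int:
    """Smallest majority of an n-member cluster."""
    return n // 2 + 1

def faults_tolerated(n: int) -> int:
    """Members that can fail while a majority still survives."""
    return n - quorum(n)

for n in (1, 2, 3, 4, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {faults_tolerated(n)} fault(s)")
# 2 members tolerate 0 faults (same as 1!), 3 tolerate 1, 5 tolerate 2.
```

Note that going from 1 to 2 members buys you nothing, and 4 is no better than 3 — which is exactly the rule the managed control plane now worries about on your behalf.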
I noticed this as well. I think they are probably still evaluating where to set the floor. In all honesty, $10 nodes are very fair. I had a semi-poor experience with $5 nodes (for masters, at least) when I used kubeadm on DO. The $15 2cpu/2gb size is probably the sweet spot for this, although $5 nodes would be nice for just messing around with some workers.
Were they ever 'new' in anything? I use them for small personal projects (they've gotten a lot more stable recently). I never thought of them as an innovative cloud provider but just one that was cheap and easy.
Same here. IME, DO is a great choice for many projects, because it's inexpensive, straightforward and reliable. I wouldn't trade any of those 3 attributes for "innovative".
To me they are closer to the classic hosting companies, the ones where you can get a "Virtual Private Server" for $20/month rather than "Some Ether" for $0.0000001 / weird unit.
There was a window where they were a nice option for small SSD backed instances when EC2 was still doing low performance spinning disks for their bottom tier.
Like many others, I'm not sure DO's USP is being at the cutting edge. I like them because I get a decent amount of control, I have found their support to be quite responsive, and their products have always been very stable for me.
Echoing @gtf21 and @chrisweekly: in short, us taking out the complexity in using and scaling with cloud capabilities and making learning easy for all developers has been our innovation. We don't always have to be the first to launch to add value to millions of developers around the world :)
This would be nice if the company didn't ignore all of the spam which comes from their network and the spamvertized sites they host. They deliberately ignore reports sent to their abuse address and attempt to avoid responsibility by making people who want to report abuse jump through hoops to break down spam and submit it in a web form.
Companies which protect spammers will never get any business from me, plus their email reputation is already pretty crappy, so why would I ever want to run containers on their networks?
Love you guys, I've had a private VM for years, at a good price, with great availability, that does exactly what I need. I also really like the firewall I can adjust through the web interface.
Did you try to launch your own cloud offering on someone else's cloud? Did you expect the other cloud would never expand their offering?
I doubt they ever tried to compete with you, and probably didn't even know you were doing something similar. You were just able to come to market before they felt they were ready with a similar product.
This is implying that you or others were creating managed products using Droplets? How is it DigitalOcean's fault if they want to create more vertically integrated products using their own technology?