Replaced Docker Desktop with `colima` as well a few months ago. I've been using it daily since then and haven't had any issues; sometimes I just delete and recreate an instance to upgrade the Docker version, which only takes a few minutes.
I like the fact that I decide when to upgrade, rather than Docker Desktop nagging me every week.
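For anyone curious, the upgrade dance is roughly this (assuming a Homebrew install; the resource flags are just what I happen to use):

    brew upgrade colima docker          # newer colima + docker CLI
    colima delete                       # throw away the old VM
    colima start --cpu 4 --memory 8     # fresh VM picks up the newer Docker engine
    docker version                      # sanity check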
To chime in here: containers all the way. I started with buildpacks, since that seemed like the ultimate zero-conf approach (like Heroku), but it was very slow, so I switched to containers. Works a charm.
How do you mean? I never touch the actual containers; I let Dokku take care of those. I just push the code with the buildpack and Dokku handles the rest.
There is always a container in the end. But do you actually provide a Dockerfile along with your app code, or do you let the buildpack create the container for you (i.e. no Dockerfile in your source code)?
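To make the distinction concrete (the app here is hypothetical; the push is identical either way, which is rather the point):

    # Option A: a Dockerfile committed in the repo root -- Dokku builds from it
    #   FROM node:alpine
    #   WORKDIR /app
    #   COPY . .
    #   RUN npm install
    #   CMD ["npm", "start"]
    git push dokku master

    # Option B: no Dockerfile at all -- Dokku's buildpack detection
    # figures out the app type and builds the image for you
    git push dokku master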
It is definitely not easy to use "naked" open source Kubernetes, but it can be done. My company [1] packages Kubernetes-on-autopilot, which includes HA configuration, single-click updates to the latest k8s version, etc.
We run it for our customers on their AWS clouds and even on enterprise infrastructure. So it's just like with any OSS project: DIY vs. investing in tooling.
A huge benefit of adopting Kubernetes for your SaaS product is this: it becomes much easier to push updates to enterprise environments. Selling SaaS to private clouds is pretty sweet: the size of a deal is larger but the incremental cost of R&D and ops (!) is now very low, because of standardized deployment models like k8s (what we do) or Mesos.
What's your opinion on using plain vanilla Salt vs. kube-up vs. kube-aws vs. the new kops?
The problem with k8s is that there is way too much stuff happening now, and it is hard to get started quickly.
For example, we started using Docker when it was alpha, as a one-man startup, and getting up and running was very easy. On the other hand, using k8s on a 3-node cluster is... scary.
In my opinion, kube-up and other Kubernetes-specific provisioning helpers were created with the goal of letting people of varying backgrounds complete the numerous quick starts and tutorials with minimal prerequisites. They make no assumptions about the user's expertise in system administration.
For production use, you should definitely embrace the tooling you have already invested in and make Kubernetes a part of it. How else would you integrate it with your existing identity management, monitoring, storage/backups, etc.? I am no expert in Salt, but assuming it is similar to Ansible (my choice), then yes: building a solid set of Salt recipes would be my recommendation. Or just hire us: we'll install and manage hundreds of clusters for you, one for each engineer, even. ;)
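To make the quick-start path concrete, the kops route boils down to roughly this (the bucket and domain names are placeholders):

    export KOPS_STATE_STORE=s3://my-kops-state        # placeholder state bucket
    kops create cluster --name=k8s.example.com \
        --zones=us-east-1a --node-count=3             # generate the cluster config
    kops update cluster k8s.example.com --yes         # actually build it on AWS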
We had taken this conversation to email, but this is what I wrote:
It's an ecosystem question. Docker became wildly successful because a developer could start using it in production within minutes. In early-stage startups there is NO difference between developers and devops. All of the developers who deployed their MVP apps on Docker (like me) are the ones who are starting to pay for services like Codeship, etc. We have grown with the Docker ecosystem.
The question is whether k8s gives us that flexibility. I don't see that. For example, the next easy step up for people is docker-compose: single host, many services. And that gradually extends to docker-swarm.
Lots of people say I "don't need k8s for a 3-node cluster", but I'm still stuck... because I need to do something.
So I have to use docker-compose. And then I'm solidly locked into the Docker ecosystem.
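For what it's worth, the "single host, many services" step I mean looks roughly like this (the service names and images are just examples):

    # docker-compose.yml
    version: "2"
    services:
      web:
        build: .
        ports:
          - "8000:8000"
        depends_on:
          - db
      db:
        image: postgres:9.6
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:

    # then, on the single host:
    docker-compose up -d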
That's the problem: your value prop is great when someone has 100 clusters... But I'm asking you from an ecosystem perspective: how will you make sure that someone like me chooses k8s at some point in the future, when it is unusable for me right now?
Or do you believe that k8s is so vastly superior to anything else out there that, when I scale up enough, I will have no choice but to move to it (and then call you for help)?
Software Engineer, Infrastructure Engineer, Web Designer
At Docker Inc, we believe that containerization will soon become the next big thing, the next tool that will be part of the toolbox of every developer and sysadmin. What's "containerization"? The name comes from the LXC technology (Linux Containers), and the technique is also known as "Lightweight Virtualization".
That's why we launched Docker (http://www.docker.io/), an Open Source tool enabling anyone to run those Linux Containers very easily. Containers boot 1000x faster than virtual machines; their disk and memory footprints are also much lower; and they work on virtually all current platforms (from physical servers to public cloud instances). We think that they are the future of virtualization and will soon become ubiquitous.
Convinced? Then fork the repo on GitHub (https://github.com/dotcloud/docker) and have a look at the code. Not convinced? Then check the website (http://www.docker.io/), which contains more details, demos, and screencasts. Excited about this? Then join our engineering team!
Your responsibilities will include:
- being a contributor to the Docker project, which means contributing patches, and reviewing and merging pull requests from the community;
- working on some server-side applications; participating in product discussions, influencing the roadmap, and taking ownership of and responsibility for new projects to make them happen.
You can qualify if you...:
- can read and write Go code (because Docker itself is written in Go);
- can read and write Python code (because many tools and services built around Docker are in Python);
- are familiar with network protocols: the lower layers like IP, TCP, and UDP; and the higher layers like HTTP;
- have experience in scaling large applications;
- believe that writing unit and functional tests is important.