
Honestly, to an outsider Docker sure sounds like a world of pain.


To a developer it probably is; as a user, though, it's much easier to install self-hosted server apps with minimal effort. Especially because the Docker image usually already has sane defaults set, while the bare binary requires more manual configuration.


It's not too bad as a developer, either, especially when building something that needs to integrate with dependencies that aren't just libraries.

It may be less than ideally efficient in processor time to have everything I work on that uses Postgres talk to its own Postgres instance running in its own container, but it'd be a lot more inefficient in my time to install and administer a pet Postgres instance on each of my development machines - especially since whatever I'm building will ultimately run in Docker or k8s anyway, so it's not as if handcrafting all my devenvs 2003-style is going to save me any effort in the end.
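That throwaway per-project Postgres is one command to create and one to destroy. A sketch, assuming Docker is installed; the container name, password, and port mapping here are illustrative, not anything from the comment above:

```shell
# Spin up a disposable Postgres just for this project's dev environment.
# Name, password, and host port are arbitrary examples.
docker run -d \
  --name myapp-dev-db \
  -e POSTGRES_PASSWORD=devonly \
  -p 5432:5432 \
  postgres:16

# Done with it? Tear it down - no pet instance left to administer.
docker rm -f myapp-dev-db
```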

I'll close by saying here what I always say in these kinds of discussions: I've known lots of devs, myself included, who have felt and expressed some trepidation over learning how to work comfortably with containers. The next I meet who expresses regret over having done so will be the first.


But I can save a different outsider from a lot of pain. For example, our frontend dev won't have to worry about setting up the backend with all its dependencies; instead, docker-compose starts those eight containers (app, redis, db, etc.) and he's good to go work on the frontend.
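The compose file for a setup like that might look something like the following sketch. The service names and images are hypothetical stand-ins, not the actual eight containers mentioned above:

```yaml
# Illustrative docker-compose file; services and images are examples only.
services:
  backend:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devonly
  redis:
    image: redis:7
```

The frontend dev just runs `docker compose up` and the whole backend stack comes up without any manual setup.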

If you freelance and work on different projects, sure, rvm is a great thing, but Docker will contain it even better, and you won't litter your work machine with stuff like mine is after a few years.


If all you need is a statically linked binary running in a Screen session somewhere, then without question, you're going to find Docker to be esoteric and pointless.

Maybe you've dealt with Python deployments and been bitten by edge cases where either PyPI packages or the interpreter itself just didn't quite match the dev environment, or even other parts of the production environment. But still, it "mostly" works.

Maybe you've dealt with provisioning servers using something like Ansible or SaltStack, so that your setup is reproducible, and run into issues where you need to delete and recreate servers, or your configuration stops working correctly even though you didn't change anything.

The thing that all of those cases have in common is that the Docker ecosystem offers pretty comprehensive solutions for each of them. Like, for running containers, you have PaaS offerings like Cloud Run and Fly.io, you have managed services like GKE, EKS, and so forth, you have platforms like Kubernetes, or you can use Podman and make a Systemd unit to run your container on a stock-ish Linux distro anywhere you want.
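As one sketch of that last option: with a recent Podman (4.4+, which ships Quadlet), a container can be run as a plain systemd service from a small unit file. The image, port, and file name here are examples, not anything prescribed by the thread:

```ini
# Example Quadlet unit, e.g. ~/.config/containers/systemd/myapp.container
# Image name and published port are illustrative.
[Unit]
Description=My app container

[Container]
Image=docker.io/library/nginx:stable
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the container starts and restarts like any other systemd service on a stock-ish Linux distro.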

Packaging your app is basically like writing a CI script that builds and installs it. So you can take whatever it is you already do to accomplish that and plop it in a Dockerfile. It doesn't matter if it's Perl or Ruby or Python or Go or C++ or Erlang; the process is much the same.
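For a compiled language, that Dockerfile can be as small as the sketch below. This assumes a hypothetical Go app; the paths and binary name are made up for illustration:

```dockerfile
# Sketch of a multi-stage build for an imaginary Go app.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Static binary so the runtime image needs no libc.
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Minimal runtime image containing only the binary.
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

For an interpreted language you'd swap the build stage for `pip install`, `bundle install`, or whatever your CI already runs; the shape stays the same.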

Once you have an OCI image of your app, you can run it like any other application in an OCI image. One line of code. You can deploy it to any of the above PaaS platforms, or your own Kubernetes cluster, or any Linux system with Podman and Systemd. Images themselves are immutable, containers are isolated from each other, and resources (like exposed ports, CPU or RAM, filesystem mounts, etc.) are granted explicitly.

Because the part that matters for you is in the OCI image, the world around it can be standard-issue. I can run containers within Synology DSM for example, to use Jellyfin on my NAS for movies and TV shows, or I can run PostgreSQL on my Raspberry Pi, or a Ghost blog on a Digital Ocean VPS, in much the same motion. All of those things are one command each.
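Roughly what "one command each" looks like; the images are the real public ones for those projects, but the ports, paths, and passwords are placeholder examples:

```shell
# Jellyfin on a NAS (media path and port are examples)
docker run -d -p 8096:8096 -v /volume1/media:/media jellyfin/jellyfin

# PostgreSQL on a Raspberry Pi
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=example postgres:16

# A Ghost blog on a VPS
docker run -d -p 2368:2368 ghost:5
```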

If all you needed was the static binary and some init service to keep it running on a single machine, then yeah, Docker is unnecessary effort. But in most cases, your applications aren't simple and your environments aren't homogeneous, and OCI images are extremely powerful for that case. This is exactly why people want to use them for development as well: sure, the experience IS variable across operating systems, but what doesn't change is that you can count on an OCI image running the same basically anywhere you run it. And when you want to run the same program across 100 machines, or potentially more, the last thing you want to deal with is unknown unknowns.



