It's worth noting that the Docker experience is very different across platforms. If you just run Docker on Linux, it's basically no different from running any other binary on the machine. On macOS and Windows, you have the overhead of a VM and its RAM to contend with at minimum, but in many cases you also have to deal with sending files over the wire or, worse, mounting filesystems across the two OSes: all of the incongruities of their filesystem and VFS layers, plus the limitations of taking syscalls and pushing them over serialized I/O.
Honestly, Docker, Inc. has put entirely too much work into making it decent. It's probably about as good as it can be without improvements in the operating systems that it runs on.
I think this is unfortunate because a lot of the downsides of "Docker" locally are actually just the downsides of running a VM. (BTW, in case it's not apparent, this is the same with WSL2: WSL2 is a pretty good implementation of the Linux-in-a-VM thing, but it's still just that. Managing memory usage is, in particular, a sore spot for WSL2.)
(Obviously, it's not exactly like running binaries directly, due to the many namespacing and security APIs Docker uses to isolate the container from the host system, but it's not meaningfully different. You can also turn these things off at will.)
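To make "turn these things off" concrete, here's a rough sketch using standard docker run flags (the image name is just a placeholder):

    # Normal run: own network and PID namespace, default seccomp/capability profile.
    docker run --rm myimage

    # Progressively dropping the isolation: share the host's network and PID
    # namespaces, and disable most of the remaining sandboxing.
    docker run --rm --network host --pid host --privileged myimage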
I don't do backend work professionally so my opinion probably isn't worth much, but the way Docker is so tightly tied to Linux makes me hesitant to use it for personal projects. Linux is great and all but I really don't like the idea of so explicitly marrying my backend to any particular platform unless I really have to. I think in the long run we'd be better served figuring out ways to make platform irrelevant than shipping around Linux VMs that only really work well on Linux hosts.
IMO, shipping OCI images doesn't tether your backend to Docker any more than shipping IDE configurations in your Git repository tethers you to an IDE. You could tightly couple some feature to Docker, but the truth is that most of Docker's interface bits are actually pretty standard and therefore things you could find anywhere. The only real reason Docker can't be "done" elsewhere the way it is on Linux is the unusually stable syscall interface that Linux provides; it allows the userlands run by container runtimes to execute directly against the kernel without caring too much about the userland being incompatible with the kernel. This doesn't hold for macOS, other BSDs, or Windows (though Windows does neatly abstract syscalls into system libraries, so the problem clearly isn't that hard to deal with there).
Therefore, if you use Docker 'idiomatically', configuring with environment variables, communicating over the network, and possibly using volumes for the filesystem, it doesn't make your actual backend code any less portable. If you want to doubly ensure this, don't actually tether your build/CI directly to Docker: You can always use a standard-ish shell script or another build system for the actual build process instead.
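A hedged sketch of what that looks like in practice (the build script, image name, and variable values are all made up):

    # Build with an ordinary script so CI isn't tied to Docker; ./build.sh is a
    # hypothetical stand-in for whatever your build system already does.
    ./build.sh

    # Keep the container interface to the standard bits: env vars for config,
    # a published port for the network, a volume for persistent state.
    docker run --rm \
      -e DATABASE_URL=postgres://db.internal:5432/app \
      -p 8080:8080 \
      -v "$PWD/data:/var/lib/app" \
      myapp:latest

None of that is anything your actual application code has to know about; it just reads an environment variable, listens on a port, and writes to a directory.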
I don't think Linux is going away as the main server OS anytime soon, if ever. So that just leaves local dev.
To that end, I prefer to just stick with modern languages whose first-party tooling means they don't really have to care what OS they're building/running on. That way you can work with the code and run it directly pretty much anywhere, and if you reserve Dockerfiles for deployment (like in the OP), it'll always end up on a Linux box anyway, so I wouldn't worry too much about it being Linux-specific.
Yeah I'm not too worried about what's on the deployment end, but rather the dev environment. I don't want to spend any time wrestling Docker to get it to function smoothly on non-Linux operating systems.
Agree that it's a strong argument for using newer languages with good tooling and dependency management.
The dev-workflow usage of Docker is less about developing your own app locally and more about being able to mindlessly spin up dependencies: a seeded/sample copy of your prod database for testing, a Consul instance, the latest version of another team's app, etc.
You can just bind the dependencies that are running in Docker to local ports and run your app locally against them without having to docker build the app you're actually working on.
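A minimal sketch of that workflow, using the official postgres and redis images (versions, passwords, and the app command are placeholders):

    # Dependencies run in containers, published on localhost...
    docker run -d --name dev-db -p 5432:5432 -e POSTGRES_PASSWORD=dev postgres:16
    docker run -d --name dev-cache -p 6379:6379 redis:7

    # ...while the app you're actually hacking on runs natively against them.
    DATABASE_URL=postgres://postgres:dev@localhost:5432/postgres \
    REDIS_URL=redis://localhost:6379 \
    ./run-my-app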
I've run Docker devenvs on Linux, Windows (via WSL2, so also Linux, but only kinda), and Mac.
The closest I've come in years to having to really wrestle with it was the URL hacking needed to find the latest version willing to run on my 10-year-old MBP, which I expect would run badly on anything newer than 10.13. The installer was there on the website, just not linked, I guess because they don't want the support requests. Once I actually found and installed it, it's been fine, except that it still prompts me to update and (like every Docker Desktop install) bugs me excessively for NPS scores.
For a backend with any sort of complexity, it's more or less impossible to be OS-agnostic. You can choose layers that try to abstract the OS away, but sooner or later you're going to run into a part of the abstraction that leaks. That, plus the specialty nature of Windows/Mac hosting, means your backend is gonna run on Linux.
It made sense at one point to use Macs, but these days pretty much everything is Electron or web-based or has a native Linux binary. IMHO backend developers should use x64 Linux. That's what your code is running on, and using something different locally is just inviting problems.
The problem, of course, being that x86 Linux on laptops is still, and might always be, terrible. Using an ARM Mac to develop your backend services is not ideal, but it's probably still a better user experience than the 0.01% of cases where a modern language does something vastly different on your local machine than in production (which, btw, is also very often ARM these days, at least on AWS).
I've used Ubuntu, WSL2, and currently an M1 Mac, and if I need to be mobile AT ALL with the machine, I choose a Mac any day. For a desktop computer, Ubuntu works great though.
It's not as if you're locked to Linux, though. Most if not all of my applications would run just fine on Windows if I wanted them to. It's just that when I run them myself I use a container, because I'm already choosing to use a Linux environment. That doesn't mean the application couldn't be shipped differently; it's just an implementation detail.
To a developer it probably is, but as a user, it's much easier to install self-hosted server apps with minimal effort. Especially because the Dockerfile usually already has sane defaults set, while the binary requires more manual config.
It's not too bad as a developer, either, especially when building something that needs to integrate with dependencies that aren't just libraries.
It may be less than ideally efficient in processor time to have everything I work on that uses Postgres talk to its own Postgres instance running in its own container, but it'd be a lot more inefficient in my time to install and administer a pet Postgres instance on each of my development machines - especially since whatever I'm building will ultimately run in Docker or k8s anyway, so it's not as if handcrafting all my devenvs 2003-style would save me any effort in the end.
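As a hedged illustration of the "own Postgres per project" bit (names, ports, and versions invented here):

    # Each project gets a disposable, named Postgres with its own data volume;
    # nothing on the host to install, upgrade, or babysit.
    docker run -d --name projA-pg -p 5433:5432 \
      -v projA-pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=dev postgres:16
    docker run -d --name projB-pg -p 5434:5432 \
      -v projB-pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=dev postgres:15

    # When a project is done, so is its would-be pet:
    docker rm -f projA-pg && docker volume rm projA-pgdata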
I'll close by saying here what I always say in these kinds of discussions: I've known lots of devs, myself included, who have felt and expressed some trepidation over learning how to work comfortably with containers. The next one I meet who expresses regret over having done so will be the first.
But it can save a different person a lot of pain. For example, our frontend dev won't have to worry about setting up the backend with all its dependencies; instead, docker-compose starts those eight containers (app, redis, db, etc.) and he's good to go work on the frontend.
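Something in this spirit is all it takes on their end; a trimmed-down, made-up docker-compose.yml (the real one lists all eight services), started with a single docker compose up -d:

    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: dev
        ports: ["5432:5432"]
      redis:
        image: redis:7
        ports: ["6379:6379"]
      backend:
        image: ourcompany/backend:latest
        depends_on: [db, redis]
        ports: ["8080:8080"]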
If you freelance and work on different projects, sure, rvm is a great thing, but Docker will contain it even better, and you won't litter your work machine with stuff the way mine is after a few years.
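For the rvm comparison, a rough sketch (project paths and Ruby versions are arbitrary):

    # Borrow an interpreter per project instead of installing rubies on the host.
    docker run --rm -it -v "$PWD":/app -w /app ruby:3.2 \
      sh -c 'bundle install && bundle exec rake test'

    # A different project can use a different Ruby without either one touching
    # the other, or your machine.
    docker run --rm -it -v "$PWD":/app -w /app ruby:2.7 \
      sh -c 'bundle install && bundle exec rake test'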
If all you need is a statically linked binary running in a Screen session somewhere, then without question, you're going to find Docker to be esoteric and pointless.
Maybe you've dealt with Python deployments and been bitten by edge cases where either PyPI packages or the interpreter itself just didn't quite match the dev environment, or even other parts of the production environment. But still, it "mostly" works.
Maybe you've dealt with provisioning servers using something like Ansible or SaltStack, so that your setup is reproducible, and run into issues where you need to delete and recreate servers, or your configuration stops working correctly even though you didn't change anything.
The thing that all of those cases have in common is that the Docker ecosystem offers pretty comprehensive solutions for each of them. Like, for running containers, you have PaaS offerings like Cloud Run and Fly.io, you have managed services like GKE, EKS, and so forth, you have platforms like Kubernetes, or you can use Podman and make a Systemd unit to run your container on a stock-ish Linux distro anywhere you want.
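The Podman-plus-systemd route, for instance, can be roughly this small (the names and registry are made up, and newer Podman versions offer Quadlet as the successor to generate systemd):

    # Run the container once under Podman...
    podman run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest

    # ...then have Podman emit a systemd unit that manages it from now on.
    mkdir -p ~/.config/systemd/user
    podman generate systemd --new --name myapp > ~/.config/systemd/user/myapp.service
    systemctl --user daemon-reload
    systemctl --user enable --now myapp.service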
Packaging your app is basically like writing a CI script that builds and installs it. You can take whatever it is you already do for that and plop it in a Dockerfile. Doesn't matter if it's Perl or Ruby or Python or Go or C++ or Erlang; it's all much the same.
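As a hedged sketch for a hypothetical Go service (the module layout is invented; the same shape works for any of those languages), the Dockerfile really is just the build-and-install script frozen in place:

    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /out/server ./cmd/server

    FROM debian:bookworm-slim
    COPY --from=build /out/server /usr/local/bin/server
    ENTRYPOINT ["/usr/local/bin/server"]

Then docker build -t myapp:latest . produces the image, and the machine doing the building doesn't need Go, the right libc, or anything else installed.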
Once you have an OCI image of your app, you can run it like any other application in an OCI image. One command. You can deploy it to any of the above PaaS platforms, or your own Kubernetes cluster, or any Linux system with Podman and Systemd. Images themselves are immutable, containers are isolated from each other, and resources (like exposed ports, CPU or RAM, filesystem mounts, etc.) are granted explicitly.
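"Granted explicitly" looks like this in practice (the limits and names here are arbitrary):

    # Nothing is shared unless you ask for it: one published port, a CPU/RAM
    # budget, and one read-only host directory, and that's all this container gets.
    docker run -d --name myapp \
      --cpus=2 --memory=512m \
      -p 8080:8080 \
      -v "$PWD/config:/etc/myapp:ro" \
      myapp:latest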
Because the part that matters for you is in the OCI image, the world around it can be standard-issue. I can run containers within Synology DSM for example, to use Jellyfin on my NAS for movies and TV shows, or I can run PostgreSQL on my Raspberry Pi, or a Ghost blog on a Digital Ocean VPS, in much the same motion. All of those things are one command each.
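"One command each" is only a slight exaggeration; hedged versions using the official images (ports, paths, and passwords are placeholders):

    docker run -d --name jellyfin -p 8096:8096 -v /srv/media:/media jellyfin/jellyfin
    docker run -d --name pg -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres:16
    docker run -d --name blog -p 2368:2368 -e url=https://blog.example.com ghost:5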
If all you needed was the static binary and some init service to keep it running on a single machine, then yeah, Docker is unnecessary effort. But in most cases, the problem is that your applications aren't simple and your environments aren't homogeneous. OCI images are extremely powerful for this case. This is exactly why people want to use it for development as well: sure, the experience IS variable across operating systems, but what doesn't change is that you can count on an OCI image running the same basically anywhere you run it. And when you want to run the same program across 100 machines, or potentially more, the last thing you want to deal with is unknown unknowns.
Yeah, I'm aware, which is why I qualified that part of the statement. But every company I've ever worked at has issued MacBooks (which I'm not complaining about; I definitely prefer them overall), so it is a pervasive downside.
Though also, the added complexity to my workflow is an order of magnitude more important to me than the memory usage/background overhead (especially now that we've got the M1 Macs, which don't automatically spin up their fans to handle that background VM).
The solution I’ve settled on lately is to just run my entire dev environment inside a Linux VM anyway. This solves a handful of unrelated issues I’ve run into, but it also makes Docker a little more usable.