If all you need is a statically linked binary running in a Screen session somewhere, then without question, you're going to find Docker to be esoteric and pointless.
Maybe you've dealt with Python deployments and been bitten by edge cases where either PyPI packages or the interpreter itself just didn't quite match the dev environment, or even other parts of the production environment. But still, it "mostly" works.
Maybe you've provisioned servers with something like Ansible or SaltStack so that your setup is reproducible, and still run into trouble when you need to delete and recreate servers, or your configuration stops working even though you didn't change anything.
The thing all of those cases have in common is that the Docker ecosystem offers pretty comprehensive solutions for each of them. For running containers, you have PaaS offerings like Cloud Run and Fly.io, you have managed Kubernetes services like GKE and EKS, you can run Kubernetes yourself, or you can use Podman and a Systemd unit to run your container on a stock-ish Linux distro anywhere you want.
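As a sketch of that last option, assuming a Podman recent enough to ship Quadlet (4.4+) and a purely illustrative image name, the whole setup can be a small .container file:

```ini
# /etc/containers/systemd/myapp.container -- a Quadlet unit (Podman 4.4+).
# The image name is illustrative, not a real registry path.
[Unit]
Description=My app, run as a container

[Container]
Image=registry.example.com/myapp:1.0
# Nothing is exposed unless you say so.
PublishPort=8080:8080

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, that file shows up as a regular myapp.service you can `systemctl start`, and the [Install] section takes care of starting it at boot.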
Packaging your app is basically like writing a CI script that builds and installs it. Whatever you already do for that, you can plop into a Dockerfile. It doesn't matter whether it's Perl or Ruby or Python or Go or C++ or Erlang; the process is roughly the same.
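As a hedged sketch, here's roughly what that looks like for a small Python service; the base image tag, file names, and entrypoint are placeholders, not a prescription:

```dockerfile
# Hypothetical packaging of a small Python service; swap in your own names.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer caches between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code.
COPY . .

# The same command your CI script or README would tell you to run.
CMD ["python", "main.py"]
```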
Once you have an OCI image of your app, you can run it like any other application in an OCI image, with a single command. You can deploy it to any of the above PaaS platforms, or your own Kubernetes cluster, or any Linux system with Podman and Systemd. Images themselves are immutable, containers are isolated from each other, and resources (exposed ports, CPU and RAM limits, filesystem mounts, and so on) are granted explicitly.
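To make "granted explicitly" concrete, here's an illustrative run command (the image name and host path are made up): everything the container gets from the host is spelled out on the command line.

```sh
# Ports, memory/CPU limits, and host mounts all have to be stated up front;
# nothing is shared with the host unless you ask for it.
docker run -d \
  --name myapp \
  -p 8080:8080 \
  --memory 512m \
  --cpus 1 \
  -v /srv/myapp/data:/data \
  registry.example.com/myapp:1.0
```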
Because the part that matters for you is in the OCI image, the world around it can be standard-issue. I can run containers within Synology DSM for example, to use Jellyfin on my NAS for movies and TV shows, or I can run PostgreSQL on my Raspberry Pi, or a Ghost blog on a Digital Ocean VPS, in much the same motion. All of those things are one command each.
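The PostgreSQL case, for example, really is one (long-ish) command, assuming a 64-bit OS on the Pi; the password and host data path below are placeholders:

```sh
# Run PostgreSQL 16 in a container, keeping the data directory on the host.
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=change-me \
  -v /srv/postgres:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16
```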
If all you needed was the static binary and some init service to keep it running on a single machine, then yeah, Docker is unnecessary effort. But in most cases the problem is that your applications aren't simple and your environments aren't homogeneous, and that's exactly where OCI images shine. It's also why people want to use them for development: sure, the experience IS variable across operating systems, but what doesn't change is that an OCI image runs the same basically anywhere you run it. And when you want to run the same program across 100 machines, or potentially more, the last thing you want to deal with is unknown unknowns.