
So many gotchas like this in Dockerfiles. I think the issue stems from Docker being such a leaky abstraction: to use it correctly you need to know both Docker's internals and Linux inside and out.

The default choices in Docker are baffling; it really is a worse-is-better kind of tool.

Has anyone worked on a replacement for Dockerfiles? I know buildah is an alternative to docker build, but it just uses the same file format.



Sure, there are, but they all have enough of a learning curve that they don't seem to take hold with the masses.

Nix, Guix, Bazel, Habitat, and others all solve this problem more elegantly. There are some big players out there quietly using Nix to solve (see the sketch after this list):

* reproducible builds

* shared remote/CI builds

* trivial cross-arch support

* minimal container images

* complete knowledge of all SW dependencies and what-is-live-where

* "image" signing and verification

I know Docker and k8s well, and it's kind of silly how much simpler the stack could be made if even 1% of the effort spent working around Docker were spent by folks investing in tools that are sound in principle instead of just looking easy at first glance.

Miss me with the complaints about syntax. It's just like Rust. Any pain of learning is quickly forgotten amid the unbridled pace at which you can move. And besides, it's nothing compared to (looks at calendar) 5 years of "Top 10 Docker Pitfalls!" as everyone tries to pretend the teetering pile of Go is making their tech debt go away.

I never thought I'd come around to being someone wary of the word "container", as someone who sorta made it betting on them. There is so little care for actually managing and understanding the depth of one's software stack that, well, we have this. (Pouring one out for yet another Dockerfile with apt-get commands in it.)


Docker provides a solution for balls of mud. You now have a more reproducible ball of mud!

Bazel and company require you to clean up your ball of mud first, so your payoff is further away (and can sometimes be theoretical).

Ultimately it's less about Docker and more about tooling supporting reproducibility (apt but with version pinning, please), but in the meantime Docker does get you somewhere and solves real problems without having to mess around with stuff too much.
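
For what it's worth, apt can pin exact versions today; the real pain is that pinned versions age out of the mirrors, so the pin rots. A hedged sketch, with made-up version strings:

    # pin exact package versions at install time; these versions are hypothetical
    apt-get update
    apt-get install -y --no-install-recommends \
        nginx=1.22.1-9 \
        curl=7.88.1-10

The pin is only reproducible as long as those exact versions stay fetchable, which is where something like snapshot.debian.org has to come in.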

And of course there's the "now you have a single file that you can run stuff with after building the image". I don't believe stuff like Nix offers that.


> And of course there's the "now you have a single file that you can run stuff with after building the image". I don't believe stuff like Nix offers that.

Yes it does? Also, any Nix expression can trivially be built into a much more space-efficient Docker container.


Can you generate a tar file that you can "just run" (or something to that effect)? My impression was that Nix works more like a package installer, but deterministic.


With the new (nominally experimental) CLI, use `nix bundle --bundler github:NixOS/bundlers#toArx` (or equivalently just `nix bundle`) to build a self-extracting shell script[1], `...#toDockerImage` to build a Docker image, etc.[2,3], though there’s no direct AppImage support that I can see (would be helpful to eliminate the startup overhead caused by self-extraction).
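
In shell terms, roughly (hedged; output names depend on the Nix version and the package):

    # default bundler: a self-extracting arx executable in the current directory
    nix bundle nixpkgs#hello
    ./hello                  # unpacks itself and runs GNU hello
    # Docker image instead; the result is a tarball for `docker load`
    nix bundle --bundler github:NixOS/bundlers#toDockerImage nixpkgs#hello
    docker load < hello.tar.gz    # exact output filename may differ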

If you want a QEMU VM for a complete system rather than a set of files for a single application, use `nixos-rebuild build-vm`, though that is intended more for testing than for deployment.

The Docker bundler seems to be using more general Docker-compatible infrastructure in Nixpkgs[4].

[1] https://github.com/solidsnack/arx

[2] https://nixos.org/manual/nix/unstable/command-ref/new-cli/ni...

[3] https://github.com/NixOS/bundlers

[4] https://nixos.org/manual/nixpkgs/stable/#sec-pkgs-dockerTool...


There might be quicker ways to do this, but with one extra line a derivation exports a Docker image, which in turn can be turned into a tar with one more line.

Nix's image building is pretty neat. You can control how many layers you want, which I currently maximize so that Docker pulls from AWS ECR are a lot faster.
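
For reference, a minimal sketch of that with `dockerTools` from Nixpkgs (attribute names per the Nixpkgs manual; the package choice is arbitrary):

    { pkgs ? import <nixpkgs> { } }:
    # Produces a Docker-loadable tarball containing only redis and its
    # closure -- no distro base layers.
    pkgs.dockerTools.buildLayeredImage {
      name = "redis-minimal";
      tag = "latest";
      contents = [ pkgs.redis ];
      config.Cmd = [ "${pkgs.redis}/bin/redis-server" ];
      maxLayers = 100;  # the layer-count knob mentioned above
    }

Building that with nix-build yields a tarball you can pass straight to `docker load`.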


`guix pack` (with its various options) can produce an archive that you can unpack and run anywhere.
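
Something like this, hedged (options per the Guix manual; the store path is abbreviated):

    # relocatable, self-contained tarball with a /bin symlink inside it
    guix pack -RR -S /bin=bin hello
    # guix prints the resulting tarball's store path; unpack and run anywhere
    mkdir /tmp/hello-pack
    tar xf /gnu/store/...-pack.tar.gz -C /tmp/hello-pack
    /tmp/hello-pack/bin/hello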


> * trivial cross-arch support

Uhm, I can't get Nix to build a crossSystem on a MacBook M1; it fails compiling the cross GCC. I wouldn't say it's trivial. Maybe the Nix expressions look trivial, but getting them to actually evaluate is not.


> it's kind of silly how much simpler the stack could be made if even 1% of the effort spent working around Docker were spent by folks investing in tools that are sound in principle instead of just looking easy at first glance.

This is the phrasing I was groping around for. Thank you.


> Miss me with the complaints about syntax. It's just like Rust.

Yeah, and it's competing against Dockerfiles, which I suppose in this analogy are like Python or bash with fewer footguns; the syntax and parts of the functional paradigm absolutely put Nix at a usability/onboarding disadvantage relative to Docker.


Never tried it myself, but this is the most serious attempt I've seen at an alternative Docker syntax: https://earthly.dev/

You also have Mockerfiles, which are more of a proof of concept if I understand correctly: https://matt-rickard.com/building-a-new-dockerfile-frontend/


Earthly is great (disclosure: I work on it).

But also check out IckFiles, an INTERCAL frontend for Moby BuildKit:

https://github.com/adamgordonbell/compiling-containers/tree/...


You can also use buildah commands without the whole Dockerfile abstraction. As a structured alternative, there's also the option to build container images from Nix expressions.



I've been using Packer with the Docker post-processor. I've had to give up multi-stage builds, but being able to ditch Dockerfiles and simply write a shell script without a thousand `&& \` continuations is more than enough reason to keep using it.
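
A minimal sketch of that kind of setup in Packer's HCL (block and option names follow Packer's docker builder docs; `provision.sh` stands in for your own script):

    # build a Debian-based image by running one ordinary shell script
    source "docker" "app" {
      image  = "debian:bookworm-slim"
      commit = true
    }

    build {
      sources = ["source.docker.app"]

      # a plain script -- no RUN chaining, no && \ continuations
      provisioner "shell" {
        script = "./provision.sh"
      }

      # turn the committed container into a tagged image
      post-processor "docker-tag" {
        repository = "myorg/app"
        tags       = ["latest"]
      }
    }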


> I've had to give up multi-stage builds, but being able to ditch Dockerfiles and simply write a shell script without a thousand `&& \` continuations is more than enough reason to keep using it.

I don't understand your point. If all you want to do is set up a container image by running a shell script, why don't you just run the shell script in your Dockerfile?

Or better yet, prepare your artifacts before, and then build the Docker image by just copying your files.
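
i.e. something along these lines (a sketch; the paths and base image are placeholders):

    # artifacts are built outside (CI, make, etc.); the Dockerfile only copies
    FROM debian:bookworm-slim
    COPY dist/ /opt/app/
    CMD ["/opt/app/run"]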

It sounds like you decided to take the scenic route of Docker instead of just taking the happy path.


I'm glad you were able to infer so much from so little, but what it actually sounds like is that you don't know how helpful it is to build with a system like Packer. As others have pointed out, Dockerfiles are full of gotchas; the incomprehensible mess they become, due to the limited format and the need for workarounds, is only half the reason I use Packer now. If you think Dockerfiles produce a "happy" path, then good for you, but you might first fix the COPY command and make sure it works for multiple files, with args, and ONBUILD, or any of the other warts sitting around in the issue list. We're all waiting.

Meanwhile, I write Packer files in HCL - a saner language and a saner format - without worrying about the way files are copied. Of course, it's not perfect, but I'd choose any of the other suggestions here before going back to Dockerfiles based on your optimism and the knowledge - which I already had, but which virtually every author of a Dockerfile ignores - that I can RUN a script. Thanks, but no thanks.


Agreed, but I'm not sure of the answer.

I run Linux as I always have. Building and running are super simple.

I feel like Docker was created more or less to let Mac devs do Linux things. Wastefully. And without a lot of reason, tbh. And of course, they don't generally even understand Linux.


Why would a technology built on top of cgroups, a feature only available in the Linux kernel, be created to "let Mac devs do Linux things"? In fact, running Docker on a Mac was painful in the early days with boot2docker.


Just my experience on my team. The Linux guys were already building and running things locally, so the sales pitch, so to speak, from our team was the Mac guys saying "hey, now we can build and test locally!", whereas the Linux guys just kinda found it a slight annoyance.

Things have certainly changed with the rise of Kubernetes, ECR, and the like. But back in the time of doing standard deploys into static VMs, it didn't make a ton of sense.


I encourage you to investigate where Docker came from, and the rise of containerization in general. The notion you have is rather misinformed and anachronistic. Competing against standard deploys onto VMs, especially ones using proprietary software, is exactly why containerization gained a foothold.

Whatever anecdote your team told you about Mac guys, it just has nothing to do with the rise to fame of Docker, or of containers in general. It wasn't until much later, when Mac users were starting to rely on tools like Vagrant for development environments, that Docker was seen as an alternative to that. If your team were real Linux guys, they probably would have already known about LXC, as well as all the other technologies that led up to it: jails, Solaris containers, and VServer, so seeing this as "some annoying Mac thing" is especially puzzling to me.


You know, I try to be reasonable, so: you're right - my initial comment was way too broad and dismissive.

I told a personal tale about adoption (not creation), which isn't exactly fair to the creators.

It's a slightly different and perhaps jaded view when a perfectly solid workflow is upended, and when asking why, you get responses like 'consistent OS and dependencies', which our VMs already had, and 'we can run it locally', which half of us already did.

Admittedly, there is a lot of value in a consistent and repeatable environment specification (vs bespoke everywhere), in being able to do so without needing to spin up VMs, and yes - in running Linux-y things on Mac and Windows, among other things.



