
I looked at this about a week ago and think it has potential. One thing I dislike, and it's an industry wide problem, is the eagerness to continuously (re)build containers on demand [1].

Everyone does it, but I think that's a mistake. What I want is something where I can build and publish a dev container to my local Docker registry and then use that container to develop until I decide I need to build an updated version due to changes in the OS, dependencies, etc.

To help clarify, look at this picture [2]. I'd want everything up to the dependencies or resources layer, plus all of the tooling needed to make JetBrains Gateway, etc. work in the dev container. I want to build that container on a calendar-based schedule (ex: daily) and have everything I need to develop accessible via local repositories that I can use without connecting to the internet.

Long ago I came to the conclusion that most Docker builds aren't repeatable, so the idea of re-building a consistent environment seems naive. For example:

    RUN apt-get update && apt-get install -y vim
Without specifying the exact version of every dependency, you won't be guaranteed the same version of 'vim' every time. Plus, even if you specify the exact version of your direct dependencies, I think you can still end up with varying transitive dependency versions. Even just the 'apt-get update' portion of that command is often misunderstood since it can return 0 as a result of transient failure.
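For comparison, a pinned variant might look like the following. The version string is purely illustrative (you'd look up real candidates with 'apt-cache policy vim'), and even then the pin only covers the direct package, not its transitive dependencies:

    # The version pin below is hypothetical; transitive deps still float.
    RUN apt-get update && apt-get install -y vim=2:9.1.0016-1ubuntu7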

So, even if your intent is to build a container with the most up-to-date versions of everything, a transient failure between the update and install commands can leave you with ancient versions of dependencies. This is especially likely if you're using a local APT cache like Sonatype Nexus: the upstream 'update' can fail while the local cache, which probably holds old versions of all the dependencies, lets the install command succeed.

IMO it's better to assume you have zero guarantees when (re)building Docker images, and to adopt a build, publish, use strategy instead.
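In other words, the workflow I'm advocating boils down to something like this (the registry host and tag scheme here are hypothetical):

    # Build once, stamp it with a date, and publish to the local registry
    docker build -t registry.local:5000/devenv:2024-06-01 .
    docker push registry.local:5000/devenv:2024-06-01
    # Develop against that exact tag until a deliberate rebuild
    docker run -it registry.local:5000/devenv:2024-06-01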

1. https://devpod.sh/docs/developing-in-workspaces/devcontainer...

2. https://phauer.com/2019/no-fat-jar-in-docker-image/#the-solu...



Reading the docs more, it looks like the prebuilt workspaces [1] are closer to what I would want.

1. https://devpod.sh/docs/developing-in-workspaces/prebuild-a-w...


> Even just the 'apt-get update' portion of that command is often misunderstood since it can return 0 as a result of transient failure.

Tiny correction that got me confused: an exit code of '0' would actually mean 'success', so you probably just meant that it could fail at any moment.

But also, this can happen any time external servers are accessed, regardless of the tool. An npm install could fail any time without warning, if the servers are down. Devs should expect their supply chain to break sooner or later, and plan accordingly depending on the severity of the consequences.
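One way to plan for that with npm, as a sketch (both flags are real npm options; the cache location is just the default):

    # While online: install from the lockfile, which also populates ~/.npm/_cacache
    npm ci
    # Later rebuilds: resolve entirely from the local cache, failing fast if anything is missing
    npm ci --offline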

My example: if you use the Ubuntu debug packages repository, you basically never know whether the next run will work until 'apt-get update' is run. Those repos are seemingly down or 'reindexing' for hours every few days.


> Tiny correction that got me confused: an exit code of '0' would actually mean 'success', so you probably just meant that it could fail at any moment.

No. 'apt-get update' can fail and still return 0 = success. It's a design choice. [1] [2]

> But also, this can happen any time external servers are accessed, regardless of the tool. An npm install could fail any time without warning, if the servers are down.

I think that strengthens my argument that everything should be baked into the dev container.

Edit: Reviewing those bugs, I see there's a new option to control the behavior:

    -eany, --error-on=any
           Fail the update command if any error occurred, even a transient one.
1. https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1693900

2. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776152#15
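Applied in a Dockerfile, that would presumably look like this (assuming the base image ships an apt new enough to have the option):

    # With --error-on=any, a transient update failure now exits non-zero
    # and stops the build instead of silently installing stale packages.
    RUN apt-get update --error-on=any && apt-get install -y vim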


Have you looked into distrobox? I’m using it now for my dev environment running on top of an immutable OS (MicroOS).

It’s not perfect but I never have to spin the container down.


> Have you looked into distrobox?

No. It looks a bit heavy for what I think is an ideal solution. The way this one works with JetBrains Gateway is what really piqued my interest.



