
Basically you recreate your personal base image (with the apt-get commands) every X days, so you have the latest security patches. And then you use the latest of those base images for your application. That way you have a completely reproducible docker image (since you know which base image was used) without skipping on the security aspect.
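Concretely, a minimal sketch of that setup (image names, registry, and tags are made up for illustration):

    # base.Dockerfile -- rebuilt every X days by a scheduled job
    FROM ubuntu:21.10
    RUN apt-get update && apt-get upgrade -y \
        && apt-get install -y --no-install-recommends ca-certificates curl \
        && rm -rf /var/lib/apt/lists/*

    # Application Dockerfile -- pins a specific, dated build of that base
    FROM registry.example.com/mybase:2022-03-25
    COPY . /app
    CMD ["/app/run"]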


> Basically you recreate your personal base image (with the apt-get commands) every X days, so you have the latest security patches.

How exactly does that a) ensure reproducibility, if you use a custom, unreproducible base image, and b) improve your security over daily builds of container images that run apt-get upgrade?

In the end that just adds needless complexity, to arrive at a system that's neither reproducible nor any more secure.


If I build an image using the Dockerfile in the blog post 10 days later, there is no guarantee that my application will still work. The packages in Ubuntu's repositories might have been updated to new versions that are buggy or no longer compatible with my application.

OP's suggestion is to build a separate image with required packages, tag it with something like "mybaseimage:25032022" and use it as my base image in the Dockerfile. This way, no matter when I rebuild the Dockerfile, my application will always work. You can rebuild the base image and application's image every X days to apply security patches and such. This also means I now have to maintain two images instead of one.
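The periodic rebuild might then look like this (the registry name is hypothetical; the date format matches the example tag above):

    # Scheduled job, run every X days
    DATE_TAG=$(date +%d%m%Y)    # e.g. 25032022
    docker build -f base.Dockerfile -t registry.example.com/mybaseimage:"$DATE_TAG" .
    docker push registry.example.com/mybaseimage:"$DATE_TAG"

    # The application Dockerfile then starts with:
    #   FROM registry.example.com/mybaseimage:25032022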

Another option is to use an image tag like "ubuntu:impish-20220316" (instead of "ubuntu:21.10") as base image and pin the versions of the packages you are installing via apt.
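For example (the pinned version is a placeholder; `apt-cache policy <package>` shows the real version string for your base image):

    FROM ubuntu:impish-20220316
    # Version pin below is illustrative, not a real impish version
    RUN apt-get update \
        && apt-get install -y --no-install-recommends \
            curl=7.74.0-1.3ubuntu2 \
        && rm -rf /var/lib/apt/lists/*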

I personally don't do this, since core packages in Ubuntu's repositories rarely introduce breaking changes within the same release. Of course, this depends on the package maintainers, so YMMV.


Whether you have a separate base or not, this relies on you keeping an old image around.

The advantage of a separate base is that it lets you continue updating your code on top of it, even while the new bases are broken.

You could still do that without one, though, just by forking off the single image at the appropriate layer. Not as easy, but how often does that happen?
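A rough sketch of that fork, assuming the last good build is still available (the digest is a placeholder):

    # Find the digest of the last known-good build of the single image
    docker images --digests myapp

    # A new Dockerfile builds the updated code on top of it, instead of a fresh base:
    #   FROM myapp@sha256:<digest-of-last-good-build>
    #   COPY . /app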


> If I build an image using the Dockerfile in the blog post 10 days later (...)

To start off, if you intend to run the same container image for 10 days straight, you have far more pressing problems than reproducibility.

Personally, I know of zero professional projects whose production CI/CD pipeline doesn't deploy multiple times per day, or at worst weekly in the rare case where there are zero commits.

> OP's suggestion is to build a separate image with required packages, tag it with something like "mybaseimage:25032022" and use it as my base image in the Dockerfile.

Again, that adds absolutely nothing over just pulling the latest base image, running apt-get upgrade, and tagging it/adding metadata.
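In other words, roughly this (names are illustrative):

    # Dockerfile: FROM ubuntu:21.10
    #             RUN apt-get update && apt-get upgrade -y
    docker pull ubuntu:21.10
    docker build -t registry.example.com/myapp:$(date +%Y%m%d) .
    docker push registry.example.com/myapp:$(date +%Y%m%d)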


Eh, that’s a heavy-handed and not particularly good way of ensuring reproducibility.

The smart way of doing it would be to:

1. Use the direct SHA reference to the upstream “Ubuntu” image you want.

2. Have a system (Dependabot, Renovate) update that periodically.

3. When building, use “--cache-from” and “--cache-to” to push the image cache somewhere you can access it.

And… that’s it. You’ll be able to rebuild any image that is still in your cache registry. Just re-use an older upstream Ubuntu SHA reference and change some code, and the apt commands will come from cache.
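A rough sketch of steps 1–3 with docker buildx (registry, image names, and the digest are placeholders):

    # Dockerfile -- step 1: pin the upstream image by digest;
    # step 2: Dependabot/Renovate bumps this line when a new digest ships
    FROM ubuntu@sha256:<digest>

    # step 3: push/pull the layer cache through a registry you control
    docker buildx build \
        --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
        --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
        -t registry.example.com/myapp:latest --push .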



