Docker Compose is a pretty poor development environment experience. Constantly having to rebuild containers to recompile dependencies; dealing with permissions differences for volume mounts; having to modify all your scripts to start with "docker compose run --rm"; having no shell history or dotfiles in the application containers... it leaves a lot to be desired.
> Constantly having to rebuild containers to recompile dependencies;
How often is this actually necessary? I've had projects that stick with the same dependencies for weeks/months and don't need anything new added outside of periodic version updates. There, most of the changes were to the actual code needed for shipping business functionality.
Furthermore, with layer caching, re-building isn't always a very big issue, though I'll admit that the slowness can definitely be problematic! On the other hand, you don't have to pollute your local workstation with random packages/runtimes (that might conflict with packages for other projects, depending on the technologies you use and what is installed per project or globally), and you get mostly reproducible environments quite easily - both of those are great, at least when it works!
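For what it's worth, how much the cache helps mostly comes down to layer ordering. A minimal sketch (assuming a Node project; the base image and file names are just for illustration) that copies the dependency manifests before the rest of the source, so the install layer stays cached across ordinary code changes:

```dockerfile
# Sketch only: assumes a Node project with npm.
FROM node:20-slim
WORKDIR /app

# Copy only the dependency manifests first...
COPY package.json package-lock.json ./
RUN npm ci

# ...then the source, which changes far more often.
# Edits here invalidate only this layer, not the `npm ci` layer above.
COPY . .
CMD ["npm", "start"]
```

With this ordering, a rebuild after a code-only change reuses the cached dependency layer; only manifest changes trigger the slow install step.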
> ...dealing with permissions differences for volume mounts;
This is definitely a big mess, even worse if you need to run Windows on your workstation for whatever reason, as opposed to a Linux distro (though I guess WSL can help). I personally ran into a bunch of issues when mounting files, which more or less shattered the illusion that containers sufficiently solve the dev environment problem: https://blog.kronis.dev/everything%20is%20broken/containers-...
But for what it's worth, at least they're trying and are okay for the most part otherwise.
That's quite the fast-paced environment! In that case the shortcoming seems like a valid pain point, provided that you need to launch everything locally with debugging (e.g. breakpoints/instrumentation) rather than just downloading a new container version and running it.
Is it really like that? I expected a Docker container to have all the deps, with the project itself mounted as a volume. Then each container would spin up its own watcher to build/test/serve the project, and have a bash shell open to run additional commands.
The first problem you'll encounter here is if you're using a language that keeps dependencies in a separate place (virtualenv, central cache location, etc.). You have to figure out how that works and mount that location as a separate volume, or else you'll be constantly recompiling everything when your container is recreated. Using a bind mount for the project files is also annoying because docker-compose makes no effort to sync your uid/gid, so you get all sorts of annoying permissions issues between local/container. And installing packages into a container that doesn't have an init process is... annoying at best. You can use sysbox to get one, but you're not really "just using docker compose" at that point.
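A sketch of the workarounds described above (assuming a Node project where deps live in node_modules; the service name and paths are hypothetical): a named volume shadows the dependency directory inside the bind mount so it survives container recreation, and the host uid/gid are passed in by hand, since compose won't sync them for you:

```yaml
# docker-compose.yml sketch; run `export UID GID` first,
# since most shells don't export these by default.
services:
  app:
    build: .
    user: "${UID:-1000}:${GID:-1000}"   # run as the host user to dodge permission issues
    volumes:
      - .:/app                          # bind mount for the source
      - node_modules:/app/node_modules  # named volume shadows the host's node_modules
volumes:
  node_modules:
```

The named volume means recreating the container doesn't wipe installed dependencies, at the cost of the host and container dependency trees drifting apart.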
That's the dream, but in my experience there are some thorns, and some things that just suck. Mostly these come from Windows, like a dev station using the wrong line endings, filesystem watchers not working if the project isn't on WSL storage, file permissions getting mucked up, etc. What is a real pain in the butt, though, is adding dependencies. Do you attach a shell and run npm in the container, or try to do it on the host system? Do it in the container, and you'll have to make sure those changes make their way back out, and that you rebuild the container the next time you launch it. Do it on the host, and you could run into cross-platform issues if a package isn't supported on Windows, and you'll still have to rebuild the container.
However, once you're aware of this, honestly it's not that big of a deal. Docker rebuilds are pretty fast nowadays, and you can use tools like just to make the DX a little easier by adding macros to run stuff in a container.
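As an illustration, a hypothetical justfile that hides the `docker compose run --rm` boilerplate behind short recipes (the service name `app` and the npm commands are assumptions, not anything from the thread):

```just
# Sketch only: assumes a compose service named "app".
run +args:
    docker compose run --rm app {{args}}

test:
    docker compose run --rm app npm test

# After adding dependencies in the container, bake them into the image.
rebuild:
    docker compose build app
```

Day to day, `just run npm install some-package` followed by `just rebuild` keeps the verbose compose invocations out of your shell history.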
End of the day though, folks are all gonna have their own way of working, and I think dev containers could have an advantage for peeps doing remote development. It would be nice to have a system where our developers could dial in to a container with everything they need from anywhere they want to work.
But then how is this so different from running "docker-compose" and then doing whatever you want within the container? Is the difference just that they provide ready-made Docker images for certain environments so that you don't have to create your own? Can I get the same images on Docker Hub then?
+1, trying to wrap my head around it (disclaimer - I rarely use Docker locally, as I'm not a developer), hoping to get some insights that may help me set things up better for the dev team.