> "It remains virtually impossible to create a Ruby or Python web server virtual machine image that DOESN’T include build tools (gcc), ssh, and multiple latent shell executables."
At work, our tech team has found an interesting way around this for our Python app. We build the virtualenv in a Docker container, and then run our Ansible-based deployments from inside the same container. The virtualenvs are then rsync'd to the app servers, so we avoid installing developer tools there.
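A minimal sketch of that flow, with illustrative paths and hostnames (note that a venv hard-codes the path it was created at, so it has to land at the same location on the app servers):

```shell
VENV=/opt/app/venv            # install path; must match on the app servers
APP_HOSTS="app1 app2"         # illustrative hostnames

# Inside the build container: create the venv and install dependencies
python3 -m venv "$VENV"
"$VENV/bin/pip" install -r requirements.txt

# From the deployment step: push the finished venv to each app server
for h in $APP_HOSTS; do
  rsync -az --delete "$VENV/" "deploy@$h:$VENV/"
done
```

The app servers never see pip, gcc, or any build tooling; they only receive the finished directory tree.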
I'm ditching virtualenvs and going with good old Debian packaging and a private APT repository.
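For reference, the minimal metadata a binary .deb needs is a `DEBIAN/control` file along these lines (package name, maintainer, and dependencies here are made up); `dpkg-deb --build` then produces the package you'd push to the private APT repo:

```
Package: myapp
Version: 1.0.0
Architecture: amd64
Maintainer: Ops Team <ops@example.com>
Depends: python3
Description: Example web app packaged as a .deb
 Installs the app and its vendored dependencies under /opt/myapp.
```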
For VMs/containers that already run a single application, there's really no point, except for some weird edge cases, in having a virtual environment inside a virtual environment.
I've had initial success with a few simpler projects and am now looking into transitioning more complex ones. Not sure whether it'll go without any hassle, but it seems worth trying. At worst, I'll just have wasted my time and will return to virtualenvs.
One reason to keep virtualenvs is that the system Python (VM or container) includes extra Python packages that your app may or may not need. If you use a virtualenv, you exclude these system-installed packages and guarantee a clean starting point.
I find the problem with this approach comes when you want to ship a package (e.g. requests) that is newer than whatever is packaged for your OS. Then you have to repackage it yourself, taking on the corresponding duty to keep it patched. I'm using wheel files in production for this very reason.
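One way the wheel-based approach can look, sketched with illustrative paths (`pip wheel` and the `--no-index`/`--find-links` flags are standard pip features):

```shell
# On a build host matching the production OS and architecture:
# build wheels for every requirement into a local directory
pip wheel -r requirements.txt -w wheelhouse/

# Ship wheelhouse/ to the app server, then install entirely offline,
# never touching PyPI or OS packages
pip install --no-index --find-links=wheelhouse/ -r requirements.txt
```

You get current upstream versions without repackaging them as OS packages, and the install step needs no compiler on the server.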
Same here. As a bonus, it makes it easy to create a bare-metal OS installation image that includes your app.
I'm running a script right now that generates an ISO which turns a brand-new machine into a server running our app, with a template DB, in a completely unattended fashion.
I don't know what you consider an unwieldy chain of build steps, but for Ruby it's simply a matter of building the container, running it with your app directory (or a suitable build location) mounted as a volume, installing the dev dependencies, executing "bundler install --standalone --path vendor", and then using the result as the basis for the final container image.
You can make that cleaner by putting the build steps into one Docker image and the final app into a second, and having them share a base image that contains all the common dependencies.
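Sketched as a multi-stage Dockerfile (image names, packages, and paths are illustrative; the `base` stage plays the role of the shared base image):

```dockerfile
# Shared base with only the runtime dependencies
FROM ruby:3.2-slim AS base
WORKDIR /app

# Build stage: adds compilers, vendors the gems
FROM base AS build
RUN apt-get update && apt-get install -y build-essential
COPY Gemfile Gemfile.lock ./
RUN bundle install --standalone --path vendor

# Final image: no build tools, just the app plus vendored gems
FROM base
COPY --from=build /app/vendor ./vendor
COPY . .
```

The finished image inherits everything from `base` but never contains gcc or the other build-stage tooling.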
For Ruby, at least, the intermediate build step would typically only need to be re-run when your Gemfile/Gemfile.lock changes.
Yes, so the attack surface is still larger than with the other solutions outlined in the article, but we've managed to avoid installing things like gcc. It's also significantly cut down our deployment time, especially when updating libraries/requirements.