> Try to rebuild that thousand-dependencies app in three years from now and you'll see ;-)
This is your fault for expecting free resources to remain free forever. If you care about build reproduction, dedicate resources to maintain a mirror for your dependencies. These are trivial to set up for any module system worth mentioning (and trivial to write if your module system is so new or esoteric that one wasn't already written for you). If you don't want to do this, you have no place to complain when your free resource disappears in the future.
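For what it's worth, with Bundler this can be as simple as pointing the Gemfile at an internal mirror (the URL below is made up, obviously):

    # Gemfile -- resolve gems against your own mirror instead of rubygems.org,
    # so a build still works after upstream deletes or yanks a package
    source "https://gems.internal.example.com"

    gem "rails", "5.2.4.3"  # exact pin; the mirror keeps this version around

Bundler can also route an existing source through a mirror with "bundle config mirror.https://rubygems.org <mirror-url>", which doesn't even require touching the Gemfile.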
I agree. But I find two problems with your proposal:
1- Maintaining a mirror of dependencies can be a non-trivial overhead. In this app that I was working on, the previous devs had forked some gems on GitHub, and then added that specific GitHub repo to the requirements. But they did not do it for every dependency, probably because they did not have the time/resources to do that.
2- As a corollary to the above, sometimes the problem is not the package itself but compatibility among packages. E.g. package A requires version <= 2.5 of package B, but package C requires version >= 2.8 of package B (sketched below). Now I hear you asking "then how did it compile in the first place?" Probably the requirement was for package A v2.9 and package C latest version, so while A was frozen, C got updated. This kind of problem is not solved by forking on GitHub, unless you maintain a different fork of each library for each of your projects, but that's even more problematic than maintaining the dependencies themselves.
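To make the conflict concrete, here is roughly what those constraints look like as gemspec declarations (gems a, b and c are hypothetical, standing in for A, B and C above):

    # a.gemspec -- package A, frozen at 2.9, caps package B
    Gem::Specification.new do |s|
      s.name    = "a"
      s.version = "2.9"
      s.add_runtime_dependency "b", "<= 2.5"
    end

    # c.gemspec -- package C, whose latest release now needs a newer B
    Gem::Specification.new do |s|
      s.name    = "c"
      s.version = "4.0"
      s.add_runtime_dependency "b", ">= 2.8"
    end

No version of b satisfies both "<= 2.5" and ">= 2.8", so the resolver has to give up, even though every individual package is still available.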
P.S. At least for once, it wasn't "my fault", I didn't build that app LOL ;-)
> 1- Maintaining a mirror of dependencies can be a non-trivial overhead. In this app that I was working on, the previous devs had forked some gems on github, and then added that specific github repo to the requirements. But they did not do it for every dependency, probably they did not have time/resources to do that.
You've precisely identified the trade-off. You basically have three options. You can
1. Maintain a local repo of your dependencies (high effort)
2. No dependencies, include everything as 'first-class' code (lower upfront effort, but v. messy)
3. Keep relying on the free upstream (no upfront effort, but you carry the risk of it vanishing out from under you)
This problem is solved by mirroring dependencies and pinning the versions. Even against a git repo, pinning to a particular SHA is possible.
Automatically upgrading versions (i.e. not pinning versions) in a production build is an anti-pattern.
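In a Gemfile the difference is one parameter (repo URL and SHA below are made up):

    source "https://rubygems.org"

    # Anti-pattern: floats to the tip of the default branch on every
    # bundle install, so two builds can fetch different code
    # gem "some_gem", git: "https://github.com/example/some_gem.git"

    # Pinned to an exact commit: reproducible for as long as the repo
    # itself survives -- hence the mirror
    gem "some_gem", git: "https://github.com/example/some_gem.git",
                    ref: "9f2c1d4e0b7a"

Checking in the Gemfile.lock gives the ordinary rubygems dependencies the same treatment, since Bundler resolves against the lock on install.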
These sound like problems incurred due to a previous lack of software engineering rigor. As an industry, when we encounter challenges like this we should be learning how to solve them without reinventing the wheel. Pinning versions and maintaining mirrors of dependencies (whether that's an HTTP caching proxy for npm/pypi/maven/etc or keeping a snapshot of all dependencies in a directory on a filesystem somewhere) is something that any company requiring stability needs to take seriously.
Of course pinning the versions and identifying the particular commit in the Gemfile would have solved it, as long as it was done for every package; otherwise we are back at problem 2 in my post above.
In this particular case, there were just 3-4 requirements (out of more than 100) that pointed to a git repo, and only one of them also specified a particular commit. The other "git requirements" were just cloning the latest commit from their respective repos.
> Automatically upgrading versions (i.e. not pinning versions) in a production build is an anti-pattern.
We did not have access to a production version, only to a git repo; that's the very reason we had to rebuild in the first place. I can imagine all versions were locked when the system went into production years ago.
There's more to dependency hell than "oops, the package disappeared." Try updating one of those dependencies for a security fix, only to find that it now depends on Gizmo 7.0 while one of your other dependencies requires Gizmo < 6.0.
A maintenance programmer who does not have reproducible builds should be raising that risk to management.
The issue isn't that the company's software has a dependency. The issue is that the company is taking for granted the generosity of others. If they did not get a reproducible build before, they should attempt to get one as soon as they are aware of the problem. If the package is no longer available, they must now accept the punishment in terms of lost staff time or dollars to work around the lack of the dependency.
So, in the context of this discussion... you should make use of micro-modules to reduce code duplication, avoid defects, etc. However, don't expect those micro-modules to be maintained or available in the future, so you need to set up your own package cache and maintain it in perpetuity.
Or, you can implement the functionality yourself (or copy/paste if the license allows) and avoid the hassle.
I've been in the same situation as OP many times (although in most cases I was brought in to fix someone else's code).
In the Ruby ecosystem, library authors didn't really start caring about semantic versioning and backwards compatibility until a few years ago. Even finding a changelog circa 2011 was a godsend.
I think this was mainly caused by the language itself not caring about those either. 10 years ago upgrading between patch releases of Ruby (MRI) was likely to break something.
At least this is one thing JavaScript seems to do better.