> But well before you get to Google scale, ... doesn't have to change in lockstep.
Not updating dependencies is the equivalent of never brushing your teeth. Yes, you can ship code faster in the short term, but version skew will be a huge pain in the future. A little maintenance every day is preferable to ten root canals in a few years.
As you scale a small company, it's exceedingly rare not to need ten root canals along the way. Meanwhile, it's exceedingly common to need to pivot quickly, even if that comes at the cost of near-term engineering rigor.
I feel obliged to point out that I work at a company that uses a monorepo, so this isn't a "never use monorepos" counter-post. Instead my points are borderline tautological:
There's a balance to strike between near-term sacrifice and long-term sustainability. But you need good reasons to pick the side of the scale that has historically had fewer resources invested in it, and that puts the onus on your engineering team to absorb the knock-on effects of that disparity while still building a fledgling company.
> Not updating dependencies is the equivalent of never brushing your teeth
That's a straw man: the choice is not between updating and not updating. The choice is between updating on my own terms and updating on someone else's.
I recently updated stripe from 2.x.x to 5.x.x in one of the projects. That's several years without updates. Wouldn't it be fun if somebody were forced to update multiple projects every single time Stripe shipped a new minor version? And if we took the "true monorepo" model all the way, at what pace do you think stripe would get updated if it were Stripe's responsibility to update all of its dependents?
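To make "updating on my own terms" concrete, here is a minimal sketch (the project names, version ranges, and release number are invented for illustration) of what per-project pinning buys you in a polyrepo: each project declares the major line it has verified, and only projects whose range already covers a new release pick it up without extra work.

```python
# Sketch only: illustrates per-project version pinning, not any real setup.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Each project pins the major line it has actually tested against.
project_pins = {
    "billing-service": SpecifierSet(">=2,<3"),   # still on the 2.x line
    "checkout-service": SpecifierSet(">=5,<6"),  # already migrated to 5.x
}

new_release = Version("5.4.0")  # hypothetical upstream release

for project, pin in project_pins.items():
    if new_release in pin:
        print(f"{project}: picks up {new_release} on the next routine update")
    else:
        print(f"{project}: keeps its pinned line until the team schedules the migration")
```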
You're conflating the management of external dependencies with that of internal ones. Ideally, Stripe acts as a Library Vendor here, so these are long-lived major versions with a well-defined surface area and upgrade path. Within your company you don't want every team to have to operate as a Library Vendor, and you also want to take advantage of the command economy you operate in to drive changes across the company rapidly.
Also, Amazon went through this whole thing. They have tons of tooling built up around managing different versions of external and internal dependencies and rolling them out in a distributed fashion. They are doing polyrepo at a scale that is unmatched by anyone else. And you know what they've settled on? Teams getting out of sync with the latest versions of dependencies is a Really Bad Thing, and you get barked at by a ton of systems if your software is stale on the order of days/weeks.
> Within your company you don't want every team to have to operate as a Library Vendor
But you want some teams to operate this way. And the best way to do it is by drawing boundaries at the repo level.
This is similar to the monolith-vs-services debate. Once a monolith gets big enough, there's benefit in breaking it up a bit. Technically nothing prevents you from keeping it modular; it's just that humans really suck at it.
> take advantage of the command economy you operate in to drive changes across the company rapidly
Driving changes across the company is a self-serving middle-manager goal. There's a reason why central planning fails at scale every single time it is attempted.
> Teams getting out of sync with the latest versions of dependencies is a Really Bad Thing
It definitely can be a bad thing. But you know what's even worse? Not having the option to get out of sync. If getting out of sync becomes a problem, a polyrepo lets you address it with simple tooling.
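For what "simple tooling" can mean in practice, here's a minimal sketch (repo names, versions, and the skew policy are made up, and it assumes you can collect each repo's pinned version, e.g. from lockfiles) of a check that flags repos that have drifted too far behind an internal library:

```python
# Sketch only: flag repos whose pinned version of an internal library has
# fallen more than one major version behind the current release.
from packaging.version import Version

CURRENT = Version("5.2.0")   # latest release of the internal library
MAX_MAJOR_SKEW = 1           # policy: at most one major version behind

pinned_by_repo = {
    "payments": Version("5.1.3"),
    "reporting": Version("4.7.0"),
    "legacy-batch": Version("2.9.1"),
}

for repo, pinned in pinned_by_repo.items():
    skew = CURRENT.major - pinned.major
    if skew > MAX_MAJOR_SKEW:
        print(f"WARN {repo}: pinned {pinned}, {skew} majors behind -- schedule an upgrade")
```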
The assumption you're making is that teams in a polyrepo will spend the considerable engineering effort it takes to maintain a stable interface. Paraphrasing Linus: “we never break userspace.”
In practice internal teams don’t have this type of bandwidth. They need to make changes to their implementations to fix bugs, add optimizations, add critical features, and can’t afford backporting patches to the 4 versions floating around the codebase.
Separate repos work for open source precisely because open source libraries generally don’t have a strong coupling between implementers and users. That’s the exact opposite for internal libraries.
> In practice internal teams don’t have this type of bandwidth
You don't need bandwidth to maintain backward compatibility in a polyrepo. As you said yourself, you need loose coupling.
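The cheap version of that often looks like a thin shim rather than a maintained fork; a minimal sketch (the function names and signatures are invented for illustration):

```python
# Sketch only: keep the old entry point as a deprecated wrapper around the new
# one, so dependents in other repos keep working and upgrade on their own terms.
import warnings

def create_charge_v2(amount_cents: int, currency: str, *, idempotency_key: str) -> dict:
    """New API: explicit currency and idempotency key."""
    return {"amount": amount_cents, "currency": currency, "key": idempotency_key}

def create_charge(amount_cents: int) -> dict:
    """Old API, kept as a shim; callers get a warning instead of a breakage."""
    warnings.warn(
        "create_charge() is deprecated; use create_charge_v2()",
        DeprecationWarning,
        stacklevel=2,
    )
    return create_charge_v2(amount_cents, "usd", idempotency_key="legacy")
```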
When you are breaking backward compatibility, the amount of bandwidth required to address it is the same in mono- and polyrepos (with some exceptions benefitting polyrepos).
The big difference, though, is whose bandwidth we're going to spend. Correct me if I'm wrong, but my understanding is that at Google it's the responsibility of the dependency to update its dependents. E.g. if the compiler team makes a breaking change to the compiler, they're also responsible for fixing all of the code it compiles.
So you're not developing your package at your own pace; you're limited by the company's pace. The more popular a compiler is, the slower it gets developed. You're slowing down innovation for the sake of predictability. To some degree you can just throw money at the problem, which is why big companies are the only ones who can afford it.
> can’t afford backporting patches to the 4 versions floating around the codebase
Backporting happens in open source because you don't control all of your users' dependencies. Someone can be locked into a specific version of your package through another dependency, and you have no way of forcing them to upgrade. But if we're talking about internal teams, upgrading is always an option, so you don't have to backport (though you still have the option, and in some cases it might make business sense).
> open source libraries generally don’t have a strong coupling between implementers and users. That’s the exact opposite for internal libraries.
I disagree. There are always plenty of opportunities for good boundaries in internal libraries.
Though I'll grant you, if you draw bad boundaries, a polyrepo will have the problems you're describing. But that's the difference between the two: a monorepo is slow and predictable, a polyrepo is fast and risky. You can reduce polyrepo risk by hiring better engineers; you can speed up a monorepo (to a certain degree) by hiring more engineers.
When there's competition, slow and predictable always loses. That's partly why I believe Google can't develop any good products in-house: pretty much all of their popular products (other than search) are acquisitions.