> You don't really get a lot of the gains of microservices if you're using a monorepo
I think the two are completely orthogonal.
At Google, when you check in code, the tests that run are the ones your change could have broken, not every test in the system. For most services, that means just testing the service itself. For infrastructure code, it means testing many downstream services.
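Roughly, the selection works like this toy sketch (made-up target names, nothing like the real tooling): walk the reverse-dependency graph outward from the changed target and run only the tests you reach.

```python
from collections import deque

# Hypothetical dependency edges (consumer -> direct dependencies).
DEPS = {
    "//payments:service_test": ["//payments:service"],
    "//payments:service": ["//common:logging"],
    "//search:frontend_test": ["//search:frontend"],
    "//search:frontend": ["//common:logging"],
    "//ads:pipeline_test": ["//ads:pipeline"],
    "//ads:pipeline": [],
}

def affected_tests(changed_target):
    """Return every *_test target that transitively depends on changed_target."""
    # Invert the graph: dependency -> direct consumers.
    rdeps = {}
    for consumer, deps in DEPS.items():
        for dep in deps:
            rdeps.setdefault(dep, []).append(consumer)

    seen, queue = {changed_target}, deque([changed_target])
    while queue:
        for consumer in rdeps.get(queue.popleft(), []):
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return {t for t in seen if t.endswith("_test")}

# A change to one service only reruns that service's tests...
print(affected_tests("//payments:service"))  # {'//payments:service_test'}
# ...while a change to shared infrastructure fans out to every consumer.
print(affected_tests("//common:logging"))    # payments and search tests
```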
It seems things have changed since I last looked at how Google does deployments. Back then, every test ran on every checkin to the mainline, and all code was checked into the mainline. That's even described in the Google SRE book.
I think you were misunderstanding something. Why would every code change cause a compile and test across the entire company? That is to say: not only does that not scale, it’s totally unnecessary*. Only the downstream consumers of a change are rebuilt and tested, like you’d expect (see: bazel and the monstrous makefile before that). In this sense, the fact that Google uses a monorepo is mostly an implementation detail. It has some impact on the company’s workflows and tooling, but not on its software architecture.
* unless you’re changing a very common dependency, of course, and Google has tooling for this.
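You can see the same mechanism outside Google: Bazel's query language exposes reverse dependencies directly. A hedged sketch (hypothetical target names) of asking for every test downstream of a change:

```python
# Ask Bazel which test rules transitively depend on a changed target,
# using its `rdeps` query function over the whole workspace (//...).
import subprocess

def downstream_tests(changed_target):
    """List test targets that could have been broken by changed_target."""
    query = f'kind("test", rdeps(//..., {changed_target}))'
    out = subprocess.run(
        ["bazel", "query", query],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.splitlines()

# e.g. downstream_tests("//common/logging:logging") would list every test
# in the workspace that a change to :logging could have affected.
```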
Hermetic builds let you cache your builds and your test executions in such a way that running all builds and all tests for every commit is indistinguishable from executing only the builds and tests your change could have affected.
Even if that were true (and it is not), the non-dependent tests would finish in essentially zero time, because their results are cached and keyed by a hash of the dependency tree.
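Here's a toy sketch of that caching idea (nothing like the real infrastructure): each test result is keyed by a hash of the target's transitive inputs, so anything your change didn't touch is a pure cache hit.

```python
import hashlib

# Hypothetical source contents and dependency edges.
SOURCES = {
    "//common:logging": b"log v1",
    "//payments:service": b"payments code",
    "//payments:service_test": b"payments tests",
}
DEPS = {
    "//payments:service": ["//common:logging"],
    "//payments:service_test": ["//payments:service"],
}

_cache = {}  # input hash -> cached test result

def tree_hash(target):
    """Hash a target's sources together with all of its transitive deps."""
    h = hashlib.sha256(SOURCES[target])
    for dep in sorted(DEPS.get(target, [])):
        h.update(tree_hash(dep).encode())
    return h.hexdigest()

def run_test(target):
    key = tree_hash(target)
    if key in _cache:
        return _cache[key] + " (cached, ~zero cost)"
    result = "PASS"  # stand-in for actually executing the test
    _cache[key] = result
    return result

print(run_test("//payments:service_test"))  # executes: PASS
print(run_test("//payments:service_test"))  # PASS (cached, ~zero cost)

# Editing a dependency changes the hash and forces a real re-run;
# targets that don't depend on it keep hitting the cache.
SOURCES["//common:logging"] = b"log v2"
print(run_test("//payments:service_test"))  # executes again: PASS
```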