
> Rolling out API changes concomitantly with downstream changes to the documentation or the OpenAPI spec.

> Introducing feature-level changes and the blog post announcing those changes.

These are horrible reasons to use a monorepo. Commits are not units of deployment. Even if you're pushing every system to prod on every commit, you'd still basically always want to make the changes incrementally, system by system, and with a sensibly sequenced rollout plan, rather than all at once.

To take one of the examples above, why would you ever have the code implementing a feature and the announcement blog post in the same commit? The feature might not work correctly. You'd want to be able to test it in a staging environment first, right? Or if you don't have staging, be able to run it in prod behind a feature flag gated to test users only, or as a dark launch, or something else that verifies the feature is working before letting real users at it and having it crash your systems, cause data corruption, or hit some other critical problem that would necessitate a rollback. But none of this pre-testing is possible if the code changes really go out in the same commit as the public announcement.
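To make that concrete, here's a rough sketch of what such a gate might look like; every name in it (the flag store, the test-user list, the two billing flows) is a hypothetical stand-in, not any particular library:

    # Hypothetical feature-flag gate: the new code path ships to prod,
    # but only designated test users hit it until it has been verified.
    FLAGS = {"new_billing_flow": True}      # toggled at runtime, not by a deploy
    TEST_USERS = {"alice@example.com"}

    def new_billing_flow(payload):
        return f"new flow handled {payload}"

    def legacy_billing_flow(payload):
        return f"legacy flow handled {payload}"

    def handle_request(user, payload):
        if FLAGS.get("new_billing_flow") and user in TEST_USERS:
            return new_billing_flow(payload)    # dark-launched path, test users only
        return legacy_billing_flow(payload)     # everyone else stays on the old path

The announcement can then go out in its own change, after the flag has been opened up and the new path has held up under real traffic.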

And speaking of rolling back... When you revert the code changes that are misbehaving, what do you do with the blog post? Unpublish it? Or do some kind of dirty partial rollback that reverts just the code and leaves the blog post in place?

The same goes for any kind of cross-project change[0], some of which appear more compelling on the surface than the "code and blog post in one" use case (e.g. refactoring an API by changing the interface and all its callers at the same time). Monorepos allow making such changes atomically, but you'd quickly find out that it's a bad idea. There are great reasons to use monorepos, but this is not one of them.
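For the API-refactoring example, the incremental alternative is the usual expand/migrate/contract sequence rather than one atomic commit; a minimal sketch (names made up for illustration):

    # Step 1 (expand): add the new capability behind a default that preserves
    # the old behaviour, so existing callers keep working untouched.
    def fetch_orders(customer_id, include_cancelled=False):
        orders = [(customer_id, "open"), (customer_id, "cancelled")]
        if include_cancelled:
            return orders
        return [o for o in orders if o[1] != "cancelled"]

    # Step 2 (migrate): callers adopt the new argument in their own commits,
    # each independently deployable and independently revertible.
    print(fetch_orders(42))                          # old-style caller, unchanged
    print(fetch_orders(42, include_cancelled=True))  # migrated caller

    # Step 3 (contract): once every caller has migrated, remove the old
    # behaviour in a final, small change.

Each step can be rolled out and rolled back on its own, which is exactly what the atomic interface-plus-callers commit gives up.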

[0] I wrote more about this a couple of years back. https://www.snellman.net/blog/archive/2021-07-21-monorepo-at...




> you'd still basically always want to make the changes incrementally, system by system, and with a sensibly sequenced rollout plan, rather than all at once.

Depends. It's significantly faster to deploy everything at the same time and accept that unlucky requests might end up in a weird state than to safely sequence changes.

In SRE phrasing, I'm choosing to spend our error budget to maximize change velocity: I give up on compatibility during deploys and skip a multi-stage rollout plan. In return, I can condense a rollout to a single commit and deploy. A 99.9% availability target yields roughly 86 seconds per day to pretend that deploys are "atomic".
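The arithmetic behind that figure, in case anyone wants to plug in their own SLO:

    # Daily error budget implied by a 99.9% availability target.
    availability_target = 0.999
    seconds_per_day = 24 * 60 * 60
    budget = (1 - availability_target) * seconds_per_day
    print(budget)   # 86.4 seconds of allowed unavailability per day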


Have you ever had to roll back some unlucky changes? Specifically roll back, not fix it frantically with several layers of fixes on top of the buggy deploy?


> Commits are not units of deployment

Thank you for this statement; I will write it in all caps in some very visible place.


I disagree with this sentiment



