
Depends on the database - I know that CockroachDB supports rolling upgrades with zero downtime, as it is built with a multi-primary architecture.

For PostgreSQL or MySQL/MariaDB, your options are more limited. Here are two that come to mind; there may be more:

# The "Dual Writer" approach

1. Spin up a new database cluster on the new version.

2. Get all your data into it, dual-writing to both the old and new clusters in the meantime (sketched below).

3. Once you're confident the new cluster is 100% caught up, switch to it as your primary database.

4. Shut down the old cluster.
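A minimal sketch of the dual-write in step 2, in Python, assuming DB-API-style connection objects; old_db, new_db, and the backfill strategy are placeholders, not any particular driver or the one true way:

    import logging

    log = logging.getLogger("dual_write")

    def dual_write(old_db, new_db, sql, params):
        # The old cluster stays the source of truth; its write must succeed.
        old_db.execute(sql, params)
        try:
            # Best-effort mirror write to the new-version cluster.
            new_db.execute(sql, params)
        except Exception:
            # Don't fail the request; log the miss so a backfill job
            # can repair the new cluster before cutover.
            log.exception("mirror write failed, needs backfill: %s", sql)

Even this simple version shows the hazard pointed out below: the two writes aren't atomic, so the clusters can diverge.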

# The eventually consistent approach

1. Put a queue in front of each service for writes, where each service of your system owns its own database.

2. When you need to upgrade a database, stop consuming from its queue, upgrade in place (bringing the DB down temporarily), and resume consumption once things are back online (see the sketch below).

3. No service reads another service's database directly. Eventually consistent caches/projections serve reads, both during normal operation and during the upgrade.
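A toy sketch of the pause/resume mechanics from step 2, using an in-process queue.Queue as a stand-in for a real broker (Kafka, SQS, etc.); the db handle and all names here are hypothetical:

    import queue
    import threading

    writes = queue.Queue()      # stand-in for Kafka/SQS/RabbitMQ
    resume = threading.Event()  # cleared while the DB is being upgraded
    resume.set()

    def consumer(db):
        # Sole writer: drains this service's queue into its own database.
        while True:
            sql, params = writes.get()
            resume.wait()             # blocks while the DB is down
            db.execute(sql, params)

    # During the upgrade window:
    #   resume.clear()  -> writes pile up in the queue, clients are unaffected
    #   ...upgrade the database in place...
    #   resume.set()    -> the consumer catches up on the backlog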

A system like this is more flexible, but the price is stale reads and temporary service degradation.



Dual writing has huge downsides: you're now moving responsibility for consistency into the application, and in any non-trivial system it's almost guaranteed the two databases won't match.


I'd think using built-in replication (e.g. PostgreSQL 'logical replication') for 'dual writing' should mostly avoid inconsistencies between the two versions of the DB, no?
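Roughly, yes: logical replication streams committed changes, so the copies shouldn't drift the way application-level dual writes can, and it works across major versions. A sketch of the moving parts via psycopg2 (host names and the publication/subscription names are made up, and the schema must already exist on the new cluster, e.g. from pg_dump --schema-only):

    import psycopg2

    # On the old primary (e.g. PG 14): publish all tables.
    old = psycopg2.connect("dbname=app host=old-primary")
    old.autocommit = True
    old.cursor().execute("CREATE PUBLICATION upgrade_pub FOR ALL TABLES")

    # On the new cluster (e.g. PG 16), with the schema already restored:
    new = psycopg2.connect("dbname=app host=new-primary")
    new.autocommit = True  # CREATE SUBSCRIPTION can't run in a transaction
    new.cursor().execute(
        "CREATE SUBSCRIPTION upgrade_sub "
        "CONNECTION 'host=old-primary dbname=app' "
        "PUBLICATION upgrade_pub"
    )
    # Initial table sync, then streaming changes flow old -> new.
    # Cut over once lag is ~zero, then DROP SUBSCRIPTION upgrade_sub.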


Yes, though I've only ever seen people use the term "dual writing" to refer to something at a higher-than-DB-level.

The way I've done this involves logical replication also: https://news.ycombinator.com/item?id=31087197


Plus you have to architect this yourself, with all the black magic involved in not messing something up.



