
Why does something like this require downtime? Couldn't both databases be used at the same time, backfilling older entries in parallel, and once all data is migrated, flip a feature switch to stop writing to the old DB? Reads could be switched with a flag too, or with some fallback logic that checks the other database when data isn't found. This approach is harder to implement and the overall migration would take longer, but considering how many companies depend on GitLab SC, I think it should be the preferred approach.
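To illustrate the dual-write-plus-backfill pattern described above, here is a minimal sketch. All names (InMemoryDB, FeatureFlags, Repo, backfill) are hypothetical stand-ins, not GitLab's actual migration code; a real migration would also need idempotent backfill jobs, conflict handling, and verification before flipping the flags.

```python
class InMemoryDB:
    """Stand-in for a real database client."""
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows.get(key)


class FeatureFlags:
    """Stand-in for a feature-flag service controlling the cutover."""
    def __init__(self):
        self.write_old = True        # flip to False once backfill is verified
        self.read_new_first = False  # flip to True to prefer the new database


class Repo:
    """Routes reads and writes across both databases during the migration."""
    def __init__(self, old_db, new_db, flags):
        self.old, self.new, self.flags = old_db, new_db, flags

    def write(self, key, value):
        # Always write to the new database; keep writing to the old one
        # until the cutover flag is flipped.
        self.new.put(key, value)
        if self.flags.write_old:
            self.old.put(key, value)

    def read(self, key):
        # Prefer whichever database the flag points at, falling back to
        # the other one for rows the backfill has not copied yet.
        primary, secondary = (
            (self.new, self.old) if self.flags.read_new_first
            else (self.old, self.new)
        )
        value = primary.get(key)
        return value if value is not None else secondary.get(key)


def backfill(old_db, new_db):
    # Copy rows that only exist in the old database; runs alongside live traffic.
    for key, value in old_db._rows.items():
        if new_db.get(key) is None:
            new_db.put(key, value)


if __name__ == "__main__":
    old, new, flags = InMemoryDB(), InMemoryDB(), FeatureFlags()
    old.put("issue:1", "legacy row")              # pre-existing data
    repo = Repo(old, new, flags)
    repo.write("issue:2", "written during migration")
    backfill(old, new)                            # catch up on old rows
    flags.read_new_first = True                   # flip reads to the new DB
    flags.write_old = False                       # stop writing to the old DB
    print(repo.read("issue:1"), repo.read("issue:2"))
```

The trade-off the later replies point at is visible even in this toy version: the router, the flags, and the backfill job are extra moving parts, and each one is a place where a mistake can lose writes or serve stale reads.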



Swallowing an hour or two of pre-scheduled downtime can be worth it if you can significantly reduce the complexity and risks associated with a migration, and get it over and done with sooner. Particularly if you're already hurting from whatever it is that you're fixing, and it's about to cause you unscheduled downtime.


GitLab team member here.

The comment [0] provides more insight into the planning and downtime requirements. The epic itself may be helpful too; it is linked from the blog post.

[0] https://gitlab.com/groups/gitlab-org/-/epics/7791#note_94102...


It is always possible, but it can be much more complex, and if a mistake is made you could end up with far more downtime, or data loss.

I'm fine with planned downtime and usually not willing to pay for service providers to do everything for absolute minimum downtime.



