Hacker News

>Let's say... you want... 1%... then 5%, then 25%

Honestly, though, it feels like the argument you're making here is that it's impossible in general to incrementally roll out database changes. Sure, it's definitely not as simple or easy as spinning up new cloud application instances and diverting HTTP requests, but it's not impossible or infeasible by any means. The problem with databases is that the best solutions are so subjective and context-specific. It's very hard to write out simple, easy-to-understand "rules" or "concepts" or whatever. Every answer to every hard problem is "it depends."

What are we rolling out here? Does the sproc write data or read data? Both? From the same table/schema/server/etc? Does it move data around? How much data? Do you already have read-replicas? How is data replicated to those? Do you have any parallel-write setups with eventual consistency? What parts of the higher level application/platform utilize this part of the database that we want to change? Can we spin up a new DB instance, with the new sproc, and establish data replication/change resolution with the legacy system as we are rolling out the change? Should we move everything to the new system and set up the compat piece as a _forward_ compat on the legacy side instead? etc, etc, etc.
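Whatever the answers to those questions turn out to be, the 1% → 5% → 25% ramp itself is straightforward if you bucket callers deterministically, so raising the percentage only ever adds users to the cohort. A minimal sketch at the application layer (the `in_rollout` helper, user IDs, and percentages are all hypothetical, not anything from the thread):

```python
import hashlib

def in_rollout(user_id: str, percent: float) -> bool:
    """Deterministically bucket a caller into the rollout cohort.

    Hashing the user id means the same user always lands in the same
    bucket, so ramping 1 -> 5 -> 25 strictly grows the cohort and a
    user never flip-flops between the old and new code paths.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10000  # 0..9999
    return bucket < percent * 100    # e.g. percent=1 -> buckets 0..99

# Cohorts only grow as the percentage is raised.
users = [str(u) for u in range(1000)]
cohort_1 = {u for u in users if in_rollout(u, 1)}
cohort_5 = {u for u in users if in_rollout(u, 5)}
cohort_25 = {u for u in users if in_rollout(u, 25)}
```

The same predicate can decide which sproc version to EXEC, which replica to read from, or which schema to write to; the hard, context-specific part is what happens on each side of the branch, not the branch itself.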

>But actually pausing execution and inspecting data? Good luck.

There are plenty of ways to do this. Copy-paste the body of the proc into your DB IDE of choice and execute it line by line. If the proc uses temp tables or @variables or anything else, you can easily instantiate those locally too and see what the values look like. Wrap it all in a transaction and roll it back, and you can run this stuff over and over on prod if you really want to be sure of what it does, as long as you aren't holding locks in the DB improperly.
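The transaction-and-rollback trick sketched above, with SQLite standing in for a production database (the `payments` table and its contents are invented for illustration):

```python
import sqlite3

# Hypothetical table standing in for production data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?, ?)",
    [(1, 10.0, "pending"), (2, 25.0, "pending"), (3, 5.0, "settled")],
)
conn.commit()

# Run the proc's statements one at a time; in Python's sqlite3 module the
# first write implicitly opens a transaction.
conn.execute("UPDATE payments SET status = 'settled' WHERE status = 'pending'")

# Pause and inspect intermediate state while the transaction is still open.
settled_during = conn.execute(
    "SELECT COUNT(*) FROM payments WHERE status = 'settled'"
).fetchone()[0]  # 3: the update is visible inside the transaction

conn.rollback()  # throw it all away; nothing persists

settled_after = conn.execute(
    "SELECT COUNT(*) FROM payments WHERE status = 'settled'"
).fetchone()[0]  # back to 1
```

On a real server the equivalent is BEGIN TRANSACTION, the proc's statements interleaved with SELECTs to inspect state, then ROLLBACK, with the caveat from above about the locks those statements hold until the rollback.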

There's also the idea of writing sprocs in a more testable/verifiable way. Maybe you have progress-tracking tables with status columns and timestamps. Maybe you split a long sproc into phases that each write their outputs into "intermediate tables" in some sort of "processing" schema. Maybe you write things to files somewhere to be consumed by a federated monitoring system. Idk, I could think of random ideas all day here. A lot of them probably sound dumb at face value, but I really don't think any of these goals are infeasible.
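The phases-plus-intermediate-tables idea, again sketched with SQLite (every table, phase, and column name here is made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    -- Intermediate table in a hypothetical "staging"/"processing" area.
    CREATE TABLE staging_totals (region TEXT, total REAL);
    -- Progress-tracking table with status and timestamp columns.
    CREATE TABLE proc_progress (phase TEXT, status TEXT, at TEXT);
    INSERT INTO orders VALUES (1, 'eu', 10), (2, 'eu', 20), (3, 'us', 5);
""")

def log(phase: str, status: str) -> None:
    conn.execute(
        "INSERT INTO proc_progress VALUES (?, ?, datetime('now'))",
        (phase, status),
    )

# Phase 1: aggregate into the intermediate table, bracketed by progress rows.
log("aggregate", "running")
conn.execute(
    "INSERT INTO staging_totals "
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
)
log("aggregate", "done")

# A test or a monitoring query can now verify phase 1's output in isolation,
# before a later phase ever touches the real rollup tables.
eu_total = conn.execute(
    "SELECT total FROM staging_totals WHERE region = 'eu'"
).fetchone()[0]  # 30.0
```

Each phase boundary becomes both a checkpoint a monitor can poll and a seam where a test (or a human with a SELECT) can assert on intermediate results.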





