
I'm sure their deployment process is way more complex than mine. I'm also sure that unlike me, they have engineers dedicated to managing the complexity. GitHub is a company that has blogged about how easy they have made deployment and how they deploy many times a day. I don't see any compelling reason why that ease of deployment shouldn't also extend to re-deploying a previous version of their service, so that a bad version doesn't have to stay up for hours.



I think you're missing the point a bit. Not every incident is a simple code deploy gone wrong; incidents take time to diagnose, mitigate, and fix. This is especially true when you have a lot of services talking to each other, all running on infrastructure that has to support that many services and users.


It’s often not as easy as deploy -> immediate problem -> rollback. Problems can take a while to diagnose, may cause some kind of poisoning that needs to be fixed (e.g. rebuilding a lost or corrupt cache), or may sit in some part of the system that nobody knew was related (e.g. maybe someone deployed code that talks to a hitherto-unqueried accounting system, and that worked fine at 4pm on Thursday but come 9am Monday it melts).

My point is that in big, complex systems there is sometimes no straight line between cause and effect. Sometimes there’s just an effect and you have to work out the cause.
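
To make the "poisoning" point concrete, here's a minimal sketch (in Python, with entirely hypothetical names; this is not GitHub's actual setup) of why rolling back the code alone may not restore service when the bad version has already corrupted shared state like a cache:

    # Sketch: a bad deploy poisons shared state that outlives the rollback.
    cache = {}  # stands in for a shared cache/queue/DB that persists across deploys

    def serve_with_v2(user_id):
        # v2 has a bug: it writes a malformed entry into the shared cache.
        cache[user_id] = {"profile": None}  # poisoned entry
        return cache[user_id]

    def serve_with_v1(user_id):
        # v1 code is correct, but it trusts whatever is already cached,
        # so it keeps serving the poisoned entry even after the rollback.
        if user_id in cache:
            return cache[user_id]
        entry = {"profile": f"fresh data for {user_id}"}
        cache[user_id] = entry
        return entry

    serve_with_v2("alice")             # bad version runs briefly
    print(serve_with_v1("alice"))      # rolled back, still broken: {'profile': None}

    cache.clear()                      # the extra remediation step beyond rollback
    print(serve_with_v1("alice"))      # now healthy again

The rollback itself is fast; identifying that the cache was poisoned and rebuilding it is the part that takes hours.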



