
Which is kind of ridiculous. If your CI breaks because GitHub is down, it isn't caching dependencies locally; it's re-downloading them on every run (e.g., on every commit), generating tons of waste and unnecessary load on the hosting service.
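A minimal sketch of what that local caching can look like as a CI step, using cargo as the example since it comes up below; the cache path and keying scheme are illustrative, not any particular CI product's API:

    # Key the cache on the lockfile hash: hit the network only
    # when the dependency set actually changed.
    KEY=$(sha256sum Cargo.lock | cut -d' ' -f1)
    CACHE="/var/ci-cache/deps-$KEY.tar.gz"
    if [ -f "$CACHE" ]; then
      tar -xzf "$CACHE" -C ~                    # cache hit: zero downloads
    else
      cargo fetch                               # cache miss: download once
      tar -czf "$CACHE" -C ~ .cargo/registry    # store for the next run
    fi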

Or, to put it bluntly, if your CI works like this, it's contributing to climate change.




I think you are wrong. Our CI infra caches all dependencies, but it depends on GitHub for new internal code pushes (which is kind of the point). If GitHub is not sending events, CI doesn't kick off.

You're ignoring half of the problem. If you don't receive events from GitHub because they are down, your CI doesn't work either -- dependency caching doesn't matter at that point.


That's assuming you're putting your own organization's code on GitHub. Then of course if GitHub doesn't work, neither does the CI that's hooked to it. This is a separate topic.


Or it's something like cargo (Rust's package manager) checking whether any dependencies have a newer version by querying the package registry index (which is hosted on GitHub, for no apparent reason).
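For cargo specifically, that index check can be skipped once the local cache is warm; cargo fetch and --offline are standard cargo features, though how to wire them into a given CI setup is an assumption left to the reader:

    cargo fetch             # populate the local cache while GitHub is reachable
    cargo build --offline   # subsequent builds never touch the registry index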


That's an excellent point. How can you tell whether a CI system uses caching, other than waiting for a GitHub outage and noticing that something broke?


Many CI systems provide a build log, do they not? Look for a “git fetch” in it instead of a “git clone”.
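For example, assuming the CI exposes its logs as plain text (build.log is a hypothetical filename):

    # A fresh "git clone" on every build suggests no caching;
    # a "git fetch" into an existing checkout suggests there is some.
    grep -nE 'git (clone|fetch)' build.log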


Check the documentation, or, when in doubt, run a job, redirect github.com to 127.0.0.1 in /etc/hosts on the CI server, and run the same job again: if it now fails, the dependencies weren't cached.
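The /etc/hosts entries for that experiment would look like the following; depending on what the build actually downloads, other GitHub hostnames may need the same treatment (and remember to remove the entries afterwards):

    127.0.0.1 github.com                  # git operations and API
    127.0.0.1 codeload.github.com         # release/source tarballs
    127.0.0.1 raw.githubusercontent.com   # raw file downloads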



