Hacker News

On-premises git hosting with Gitea or GitLab, with mirrors to GitHub, seems like a smart idea going forward.



But then you have to host it and maintain it. It's a slippery slope: how many third-party services do you bring in-house with self-hosted open-source software? Pretty soon you're spending a huge chunk of your time doing ops work. And where do you host it? On AWS, which can also go down, or on hardware at your office? With on-premises hosting, now you're in the hardware game too.


Well, your code is your company's IP. It might be prudent to keep that IP on-prem (e.g. with GitLab). Every business has varying requirements, but I've yet to be employed at one that used third-party hosting for source control without an on-site mirror. Disclaimer: my employment has been at megacorps so far; smaller shops may not do this.


With git, you have a mirror on every single developer machine (kinda... depending on what you consider "all" of the code).

We also have a full copy on our CI server (which is hosted on another service, so still not "in-house"), and we have a copy of at least the master branch (and all its history) on our production boxes (which also gets pulled into our whole backup system there).

In a disaster recovery scenario, that's more than enough for me.

Sure, if github blinked out of existence we would probably be at a fraction of our normal productivity for a while until we fully recover everything and find new workflows, but the risk vs reward there is well within the margins of what I'd consider acceptable for a company like us.
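The "mirror on every machine" point can be sketched with a bare `--mirror` clone. Everything below runs against a throwaway local repo standing in for the hosted upstream; all names and paths here are made up for illustration:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Throwaway bare repo standing in for the hosted upstream (e.g. GitHub).
git init -q --bare upstream.git

# A developer clone with one commit on a branch.
git clone -q upstream.git work
cd work
git config user.email dev@example.com
git config user.name dev
git checkout -qb trunk
echo 'hello' > README
git add README
git commit -qm 'initial commit'
git push -q origin trunk
cd ..

# A --mirror clone copies *every* ref (branches, tags, notes), so each
# such copy is a complete backup of the repository's history.
git clone -q --mirror upstream.git backup-mirror.git
git -C backup-mirror.git remote update --prune   # refresh periodically, e.g. from cron
```

The same `remote update --prune` loop is what a CI box or an internal mirror would run on a schedule.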


> In a disaster recovery scenario

That's not really enough. You need more than "all the code is around here somewhere"; you need a plan with specific steps that have been tested.


Honestly, for git, it probably is enough. We're talking about someone deleting your GitHub account, or GitHub closing overnight with no warning (it's been acquired by Microsoft, so it's much more likely that the company you're working for will shutter first). It should take ~30 minutes to push your repo to another provider, including looking up instructions. Unlike database backups, there is rarely any data loss, and any data loss should be recoverable. It's also not client-facing; it's a temporary problem similar to the wifi going down at your office. An inconvenience and hassle, yes. A long-term problem, no.
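That "~30 minutes" recovery is essentially two commands. Here's a hedged sketch where a local bare repo stands in for the replacement provider; every path and name below is hypothetical:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Your existing working clone (recreated locally here for the sketch).
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name dev
git checkout -qb trunk
echo 'v1' > app.txt
git add app.txt
git commit -qm 'work so far'

# The replacement provider: any empty remote you can push to.
git init -q --bare ../new-provider.git

# Recovery: re-point origin and push every ref in one go.
git remote add origin ../new-provider.git
git push -q --mirror origin
```

With a real provider the last two lines would use `git remote set-url origin <new URL>` on the clone you already have; the history travels with every ref.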

Furthermore, the problem with these disaster scenarios is that there are much more dangerous problems than your account being deleted. Someone with admin access could insert a back door or sell your source code to someone else. That's honestly scarier.


That's probably the case for valley-style startups where the whole team can fit in a room and all hack on the same handful of repos, but most "enterprise" customers will have hundreds of repos, with not necessarily anybody hacking on most of them at any given moment. It's very good policy for such organizations to have a "break glass in case GitHub is down" plan in place, with local mirroring of all data and a tested process for doing deploys without GitHub.


We have exactly that: our DR plan has a section on coping with the third-party source control provider being unavailable/compromised/etc. Update DNS for the equivalent of "upstream-git.foo.com" to an internal address, and continue business as usual.
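A DNS flip can't be demoed locally, but git has a client-side equivalent of the same indirection: the `url.<base>.insteadOf` config rewrite. The sketch below redirects a (made-up) dead provider URL to a local stand-in mirror; every hostname and path here is invented:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export HOME="$tmp"   # keep the --global config change inside this sandbox

# Internal mirror that "upstream-git.foo.com" would resolve to.
git init -q --bare internal-mirror.git
git clone -q internal-mirror.git seed
cd seed
git config user.email dev@example.com
git config user.name dev
git checkout -qb trunk
echo 'ok' > file
git add file
git commit -qm 'seed'
git push -q origin trunk
cd ..
git -C internal-mirror.git symbolic-ref HEAD refs/heads/trunk

# The redirect: transparently rewrite the dead provider's URL to the
# internal mirror for every git command on this machine.
git config --global url."$tmp/internal-mirror.git".insteadOf \
    'https://github.com/foo/app.git'
git clone -q https://github.com/foo/app.git restored   # actually hits the mirror
```

DNS does this for the whole office at once; `insteadOf` does it per machine without touching any repo's configured remotes.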

It's like you said: smaller shops probably think it's over-planning and overkill, but we do indeed have hundreds of "mission critical" projects that might not have been touched in over a year.


>but I've yet to be employed at one using 3rd party hosting for source control without an on-site mirror.

For me it's actually the opposite, except for one company that had only an internal SCM and no cloud stuff.


I'm not sure it's that hard to maintain some self-hosted apps. It's mostly set-up-and-forget: it doesn't randomly change its interface or license unless you upgrade it.

I've used AWS for 10 years, and in the last 5 I've never seen it just go down randomly. Even if it did, you have room to redeploy with a few clicks (assuming you back up your data regularly) instead of waiting on something you can't control.

You seem to overestimate the ops work a bit.


You know, the time required to host Gitea (or Gogs) is basically nil (you'll have to set your environment up for GitHub too). And by maintaining it, you mean making sure it's online and taking backups? Because I don't see how using GitHub saves any time on those.

Where you host it matters less, because you can simply take your backup and server script and run them at a different provider any time you want.
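The "backup and server script" half really can be small. A hedged sketch of the backup-and-restore cycle, with an invented data directory standing in for the real one (Gitea also ships a built-in `gitea dump` command that bundles repositories plus database into one archive):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Hypothetical data directory of a self-hosted git service.
mkdir -p gitea-data/repositories/team
echo 'fake repo data' > gitea-data/repositories/team/app.git

# Nightly backup: one dated tarball you can rsync/copy anywhere.
tar -czf "backup-$(date +%F).tar.gz" gitea-data

# Restore at any provider = unpack the tarball and start the server again.
mkdir restore-test
tar -xzf "backup-$(date +%F).tar.gz" -C restore-test
```

Moving providers is then just running the restore step on the new box and pointing DNS at it.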


I am currently leading the on-premises hosting of a GitLab instance, and I can say this with ease: I spend one day of each week on ops work, be it helping people, database adjustments, admin stuff, hardware checks, etc.



