that is a pretty seriously flawed report, and it was even when it was released almost two years ago. even at the time, nearly everything in both the "Git Advantages" and "Mercurial Advantages" sections was either plain wrong or misunderstood. either ignore the report entirely, or read the comments, which point out most of the issues.
Is it? I saw a couple of comments that rightly pointed out some minor things the analysis ignored, like git's reflog and the denyNonFastForwards config, but beyond that it seems reasonable to me. What other issues are there with the report?
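(For anyone who hasn't run into those two features, here's a rough sketch of what they do - the reset target below is just an example:)

    # reflog: a local journal of where your branch tips have pointed,
    # which is what makes "lost" commits recoverable after a bad reset or rebase
    git reflog
    git reset --hard HEAD@{2}   # e.g. jump back to where the branch was two moves ago

    # denyNonFastForwards: set in the shared/bare repo to refuse pushes
    # that would rewrite already-published history
    git config receive.denyNonFastForwards true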
Hopefully this means git will be coming to google code! (one of the big reasons they rejected it in favor of hg was that git was so inefficient over http before)
On a similar issue requesting darcs support (which was closed as WontFix), a google engineer wrote:
>Supporting Subversion and Mercurial is a lot of effort for our small team.
>Adding another version control system is a lot of work and darcs would not be
>our first choice.
I read that as meaning Git would be next in line, but they are not in any hurry to add it. Personally, I'd rather they concentrated on really making what they do support shine. You can always host the source on git(hub|orious) and add a Sources wiki page that points there.
Awesome stuff. I love how Git just keeps getting better.
Funny (to me) aside:
"Grack [the Rack-based Git Smart HTTP process] is about half as fast as the Apache version for simple ref-listing stuff, but we’re talking 10ths of a second."
When optimizing my web app, reducing request time by 100s of milliseconds (er, I mean, "10ths of a second") would be a monumental occasion. I dance around the room when that happens. Am I wrong in thinking "10ths of a second" might be a dramatic difference for a place like GitHub?
you have to remember that this is not serving web pages - this is fetching and pushing data, which generally takes several seconds or even minutes. Having a clone take 45 seconds isn't really a big deal, but nobody would wait that long for a web page to load - it's a very different beast.
The overhead of a few hundred milliseconds is unnoticeable in almost all cases.
The user experience of pulling a git repository isn't really comparable to building a website. Also, the effect of a slow web server compounds with each additional resource on the page - as far as I can tell, the added efficiency of the new HTTP code tries to combat exactly that repeated back-and-forth.
That's interesting - I believe it is used to continue partial downloads in dumb http mode. Technically, you can figure out which bytes of the packfile you need from the index that is downloaded, so I suppose it is possible. However, it would still be way slower than smart-http, because you would have to walk the objects one by one on the client side - one GET, figure out the next object, another GET, and so on. With smart-http, you do a short back and forth until the server can figure out the entire list of objects you need, and then it builds a custom packfile for you.
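To make the contrast concrete, here's roughly what the two request patterns look like on the wire - the host name and object ids below are made up, but the URL layout is git's:

    # dumb http: the repo is just static files, so the client crawls the object graph itself
    curl https://example.com/repo.git/info/refs            # ref list (server must have run update-server-info)
    curl https://example.com/repo.git/objects/ab/cdef...   # a commit -> learn its tree and parents
    curl https://example.com/repo.git/objects/12/3456...   # a tree -> learn its blobs and subtrees
    # ...one round trip per missing object (or ranged GETs into a packfile), over and over

    # smart http: a short negotiation, then one tailored packfile
    curl 'https://example.com/repo.git/info/refs?service=git-upload-pack'
    # ...followed by a POST to /repo.git/git-upload-pack carrying the client's
    # "want"/"have" lines; the server answers with a single packfile containing
    # exactly the objects the client is missing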
This is awesome. Hopefully this also makes Git easier to use on shared hosting, where setting up a public git daemon or giving out ssh accounts would be out of the question.
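Since the smart protocol is just a CGI program (git-http-backend, shipped with git 1.6.6+), any host that lets you run CGI scripts can serve it. A minimal Apache sketch - the paths here are illustrative, and where the backend is installed varies by distro:

    SetEnv GIT_PROJECT_ROOT /home/user/repositories
    SetEnv GIT_HTTP_EXPORT_ALL
    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

    # clients then simply:
    #   git clone https://example.com/git/project.git

And if even CGI is off the table, the old dumb protocol still works from plain static web space, as long as git update-server-info runs after each push (the stock post-update hook does exactly that).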