This is the Euclidean algorithm for GCD; it's naive and simple. But if you look at it more carefully, the commit that says "Optimized performance ..." contains just these lines:
34 var tmp = a;
35 a = Math.max(a, b);
36 b = Math.min(tmp, b);
37 if (a % b === 0) return b;
to avoid unnecessary repetitions if b is already the GCD.
Still, this wouldn't help in the given case of a = 1e12 + 1, b = 2. What's the point of those four lines 34-37? They would be unnecessary if the code used "%" instead of "-" in the first place (see the sketch below).
Edit: Also, if you stick to the original subtraction-based version of the algorithm, why add that other optimization? That wasn't part of the original algorithm either. This seems like a double standard.
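For the record, here's a minimal sketch of the "%"-based version I have in mind (my own code, not anything from the repo):

    // Euclidean algorithm using the remainder operator directly.
    // Each step replaces (a, b) with (b, a % b), so the max/min/early-return
    // lines 34-37 above become unnecessary.
    function gcd(a, b) {
      while (b !== 0) {
        var t = a % b;
        a = b;
        b = t;
      }
      return a;
    }

    // gcd(1e12 + 1, 2) now finishes in two loop iterations instead of ~5e11 subtractions.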
I do this sort of thing sporadically in my free time too... it's still good to go back and do stuff smarter when you learn better rather than accepting whatever they taught you in CS101 as the end of the story.
Whether it's JavaScript or any other language, I think writing algorithms is a great way of driving home certain concepts when you are learning them. It's even more fun to use your data structures and algorithms to solve toy problems and see the difference. Obviously, this is not recommended in production, where one must use battle-tested code.
If you like doing this sort of thing, there are a million little CS problems like this on codewars.com.
One cool feature of the site is that you can vote on solutions; this lets you see a lot of neat solutions to problems, and learn a lot about how to write clean code.
This seems high... I just went through a bunch of front-end web developer interviews, and in nearly every one I was asked some algorithm or data structure question. Maybe it's just NYC, or maybe it's where I was applying.
It's likely an artifact of the jobs you're applying for, but also consider that they might be asking those questions now because in the past there were too many applicants who weren't qualified. It's only been fairly recently that front-end development has been taken more seriously as an engineering discipline.
That maybe used to be true, but these days it's absolutely incorrect. A lot of non-Web developers are becoming "web developers", because such a large percentage of new software is written for the Web.
Why not try to implement the iterative O(ln(n)) Fibonacci from SICP (Ex 1.19), rather than the old O(n) one? Or the matrix exponentiation solution? (He'd probably want to implement iterative matrix exponentiation too... it doesn't look like he clearly knows how to do that.)
https://en.wikipedia.org/wiki/Fibonacci_number#Matrix_form
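To illustrate, a rough JavaScript sketch of the matrix exponentiation approach (my own illustration, not code from SICP or the repo; like the original, it's only exact while F(n) fits in a double, i.e. n <= 78):

    // Computes F(n) by raising [[1, 1], [1, 0]] to the nth power via
    // repeated squaring: O(log n) arithmetic operations instead of O(n).
    function matMul(x, y) {
      return [
        [x[0][0] * y[0][0] + x[0][1] * y[1][0], x[0][0] * y[0][1] + x[0][1] * y[1][1]],
        [x[1][0] * y[0][0] + x[1][1] * y[1][0], x[1][0] * y[0][1] + x[1][1] * y[1][1]]
      ];
    }

    function fib(n) {
      var result = [[1, 0], [0, 1]];   // identity matrix
      var base = [[1, 1], [1, 0]];
      while (n > 0) {
        if (n % 2 === 1) result = matMul(result, base);
        base = matMul(base, base);
        n = Math.floor(n / 2);
      }
      return result[0][1];             // [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    }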
Considering that the Fibonacci sequence grows exponentially, and hence the Nth Fibonacci number contains O(n) digits (well, in any sane base at least), you cannot compute the Nth Fibonacci number in under O(n) time: just writing the answer down already takes that long.
Given that the guy is programming in JavaScript and not using arbitrary-precision arithmetic, he's always using 64 bits and representing his numbers as floating point, so I don't think you are reasoning correctly about the asymptotic complexity here. I mean, if you really want to argue this, then, to quote Leibniz: "Calculemus!"
The JavaScript solution presented does everything in O(n) arithmetic operations, and the SICP way does it in O(ln(n)) arithmetic operations (actually, there are a lot of ways to do it in O(ln(n)) operations).
If we're going to stick to fixed-precision numbers, and we're just out to score maximum internet points, you can do it in O(1) by just computing a table in advance, and for not much storage space, since the series grows exponentially.
I mean, if we're going to screw around with the Os let's do it properly.
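Something like this, for instance (an illustrative sketch only; the cutoff is Number.MAX_SAFE_INTEGER, since beyond 2^53 doubles can't represent every integer exactly):

    // Precompute every Fibonacci number that is exactly representable as a
    // double-precision integer. F(78) < 2^53 < F(79), so the table has 79 entries.
    var FIB = [0, 1];
    while (FIB[FIB.length - 1] + FIB[FIB.length - 2] <= Number.MAX_SAFE_INTEGER) {
      FIB.push(FIB[FIB.length - 1] + FIB[FIB.length - 2]);
    }

    function fib(n) {
      return FIB[n];  // O(1) lookup; undefined once F(n) no longer fits in a double
    }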
I noticed the following code:
This uses repeated subtraction (a -= b) to implement division with remainder (a %= b); imagine a = 1e12 + 1, b = 2. Please don't do this in your code. If there is a built-in operator for division with remainder, don't reimplement it poorly in JavaScript.
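Roughly the pattern in question, next to the one-line fix (my own reconstruction for illustration, not the repo's exact code):

    // Repeated subtraction: for a = 1e12 + 1, b = 2 this loops about 5e11 times.
    while (a >= b) {
      a -= b;
    }

    // Division with remainder: one operation, same final value of a.
    a %= b;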