Algorithms.js – Atwood's Law applied to CS101 (github.com/felipernb)
59 points by Envec83 on May 30, 2014 | 34 comments



The commit message "Optimized performance of the GCD algorithm" piqued my interest. It's GCD, what's there to optimize?

I noticed the following code:

    while (b !== 0) { if (a > b) { a -= b; } else { b -= a; } }
This uses repeated subtraction (a -= b) to implement division with remainder (a %= b); imagine a=1e12+1, b=2.

Please don't do this in your code. If there is a built-in operation for division with remainder, don't reimplement it poorly in JavaScript.


This is the Euclidean algorithm for GCD; it's naive and simple. But if you look at it more carefully, the commit that says "Optimized performance ..." contains just the lines:

  34    var tmp = a;
  35    a = Math.max(a, b);
  36    b = Math.min(tmp, b);
  37    if (a % b === 0) return b;
to avoid the unnecessary repetitions if b is already the GCD


Still, this wouldn't help in the given case of a=1e12+1, b=2. And what's the point of those four lines 34-37? They would be unnecessary if the code used "%" instead of "-" in the first place.

Edit: Also, if you stick to the original subtraction-based version of the algorithm, why add that other optimization? That wasn't part of the original algorithm, either. This seems like a double standard.


Still, this is an issue with the algorithm, not the implementation.

http://en.wikipedia.org/wiki/Euclidean_algorithm


I think you misunderstood the Euclidean algorithm. The basic iteration is of the form

    gcd(a, b) = gcd(b, a mod b),
and there is no need to compute mod with anything other than %.
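
For reference, a minimal division-based sketch in JavaScript (my own illustration, not the repository's code):

    // Division-based Euclidean algorithm: gcd(a, b) = gcd(b, a % b)
    // Illustrative sketch only.
    function gcd(a, b) {
      a = Math.abs(a);
      b = Math.abs(b);
      while (b !== 0) {
        var t = b;
        b = a % b;
        a = t;
      }
      return a;
    }
    // gcd(1e12 + 1, 2) finishes in two iterations instead of ~5e11 subtractions.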


You're right. This is the subtraction-based implementation; the division-based one is the original and better one :)

https://github.com/felipernb/algorithms.js/commit/e5a04f9ad0...


There are a couple of GCD algorithms that have better asymptotic performance than Euclid's (an old one is due to Knuth): http://www.ams.org/journals/mcom/2008-77-261/S0025-5718-07-0...

I'm surprised, however, that the author didn't implement the binary GCD, although it has the same big-O as Euclid's: https://en.wikipedia.org/wiki/Binary_GCD_algorithm
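
For the curious, here's a rough binary GCD sketch in JavaScript (mine, not from the repo); note that JS bitwise operators work on 32-bit integers, so this only holds for values below 2^31:

    // Binary GCD (Stein's algorithm): shifts, subtraction and parity tests only.
    // Illustrative sketch; limited to |a|, |b| < 2^31 because of 32-bit bitwise ops.
    function binaryGcd(a, b) {
      if (a === 0) return b;
      if (b === 0) return a;
      var shift = 0;
      while (((a | b) & 1) === 0) { // both even: pull out a common factor of 2
        a >>= 1;
        b >>= 1;
        shift++;
      }
      while ((a & 1) === 0) a >>= 1; // make a odd
      while (b !== 0) {
        while ((b & 1) === 0) b >>= 1; // make b odd
        if (a > b) { var t = a; a = b; b = t; } // keep a <= b
        b -= a; // odd - odd is even, so the inner loop keeps making progress
      }
      return a << shift; // restore the common factors of 2
    }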


I've been doing this sporadically and in my free time, just for fun, so there are a lot of fundamental things that haven't been covered there. :)


I do this sort of thing sporadically in my free time too... it's still good to go back and redo things smarter once you know better, rather than accepting whatever they taught you in CS101 as the end of the story.


This reminds me of David Eppstein's fantastic PADS library [1] of algorithms implemented in Python.

[1]: http://www.ics.uci.edu/~eppstein/PADS/


Whether it's JavaScript or any other language, I think writing algorithms yourself is a great way of driving home certain concepts when you are learning them. It's even more fun to use your data structures and algorithms to solve toy problems and see the difference. Obviously, this is not recommended in production, where one must use battle-tested code.

I did a similar exercise when I was doing a course on Coursera in Python - https://github.com/prakhar1989/Algorithms


If you like doing this sort of thing, there are a million little CS problems like this on codewars.com.

One cool feature of the site is that you can vote on solutions; this lets you see a lot of neat solutions to problems, and learn a lot about how to write clean code.


Only the one who reinvents the wheel learns how it really works. :)


I've never invented a wheel but I'm pretty sure I know how it works


Why is a Javascript implementation of some very basic algorithms interesting or noteworthy?...


See: http://blog.codinghorror.com/the-principle-of-least-power/

"Atwood's Law: any application that can be written in JavaScript, will eventually be written in JavaScript."

It's kind of a joke thing.


Exactly! Also, many JS devs are clueless about basic CS101 things; it's a way to maybe get them a bit interested.

From the wiki (https://github.com/felipernb/algorithms.js/wiki):

  The intention is to blend academic concepts with the
  JavaScript bits of insanity, applying Atwood's Law to CS101.


While I may not be clueless, posts like this do make me more interested.


What percentage would you say are clueless about CS101 basic things?


I would estimate 75% of Web developers are "clueless" about algorithms and data structures (although that's a fairly strong word).


Do you include in your estimate those who may currently be JS developers not by choice but due to the vogue of web apps?


This seems high... I just went through a bunch of front-end web developer interviews, and in nearly every one I was asked some algorithm or data structure question. Maybe it's just NYC, or maybe it's where I was applying.


It's likely an artifact of the jobs you're applying for, but also consider that they might be asking those questions now because in the past there were too many applicants who weren't qualified. It's only fairly recently that front-end development has been taken more seriously as an engineering discipline.


I get the feeling that these days most JS developers, at least professional ones, aren't totally clueless about algorithms and data structures.


That sounds correct.


That may have been true at one point, but these days it's absolutely incorrect. A lot of non-Web developers are becoming "web developers", because such a large percentage of new software is written for the Web.


I think it's become some sort of elaborate joke.


Why not try to implement the iterative O(ln(n)) Fibonacci from SICP (Ex 1.19), rather than the old O(n) one? Or the matrix exponentiation solution (he'd probably want to implement iterative matrix exponentiation too... it doesn't look like he clearly knows how to do this)? https://en.wikipedia.org/wiki/Fibonacci_number#Matrix_form
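
For illustration (my sketch, not code from the repo), the fast-doubling identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 give O(ln(n)) arithmetic operations, in the same spirit as the SICP exercise:

    // Fast-doubling Fibonacci: O(log n) arithmetic operations.
    // Sketch only; results stay exact only while they fit in a double (roughly n <= 75).
    function fib(n) {
      function fibPair(k) { // returns [F(k), F(k+1)]
        if (k === 0) return [0, 1];
        var p = fibPair(Math.floor(k / 2));
        var a = p[0], b = p[1];
        var c = a * (2 * b - a); // F(2m)
        var d = a * a + b * b;   // F(2m + 1)
        return (k % 2 === 0) ? [c, d] : [d, c + d];
      }
      return fibPair(n)[0];
    }
    // fib(10) === 55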


Considering that the Fibonacci sequence grows exponentially, and hence the Nth Fibonacci number contains O(n) digits (well, in any sane base at least), you cannot compute the Nth Fibonacci number in under O(n) time.


Given that the guy is programming in JavaScript and not using arbitrary-precision arithmetic, he's always using 64 bits and representing his numbers as floating point, so I don't think you are reasoning correctly about the asymptotic complexity here. I mean, if you really want to argue this, then to quote Leibniz: "Calculemus!"

The JavaScript solution presented does everything in O(n) arithmetic operations, and the SICP way does it in O(ln(n)) arithmetic operations (actually, there are a lot of ways to do it in O(ln(n)) operations).


If we're going to stick to fixed-precision numbers, and we're just out to score maximum internet points, you can do it in O(1) by just computing a table in advance, and for not much storage space, since the series grows exponentially.

I mean, if we're going to screw around with the big-O's, let's do it properly.
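
In double precision that table is tiny anyway, since only the first 79 or so Fibonacci numbers are exact integers in a double. A throwaway sketch (mine, just to make the O(1) point):

    // Precompute every Fibonacci number that is still an exact integer
    // in an IEEE double (i.e. <= 2^53 - 1); that's fewer than 80 entries.
    var MAX_SAFE = Math.pow(2, 53) - 1;
    var FIB = [0, 1];
    while (FIB[FIB.length - 1] + FIB[FIB.length - 2] <= MAX_SAFE) {
      FIB.push(FIB[FIB.length - 1] + FIB[FIB.length - 2]);
    }
    function fibLookup(n) {
      return FIB[n]; // O(1); undefined past the end of the table
    }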


Why is this even on HN? Are you kidding me? I have some college projects too, you know...


Great, let's see them


I see you are new to HN. Basically, any old code re-implemented in JS is suddenly considered cool.

Example:

P: Look I have a spinning cube in 3d at 30 FPS!

HN: OMG, awesome, never done before!



