
That 20x is hardly just numpy overhead; read the answer more carefully:

> optimization by precomputing F values

> I also optimize by skipping the rest of the convolution if the first result isn't zero.

If you don't measure implementations of the same algorithm, you hardly have a fair language/library benchmark.
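
For reference, the early-exit part looks roughly like this in pure Python. This is a hypothetical sketch, assuming the goal is to check that every output of a valid-mode convolution is zero; the names are made up, not from the linked answer:

    def all_outputs_zero(signal, kernel):
        # Hypothetical helper, not from the linked answer.
        n, k = len(signal), len(kernel)
        for i in range(n - k + 1):
            acc = 0
            for j in range(k):
                # flipped kernel -> true convolution, "valid" mode
                acc += signal[i + j] * kernel[k - 1 - j]
            if acc != 0:
                return False  # early exit: skip the rest of the convolution
        return True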




Measuring implementations of the same algorithm is how you benchmark algorithms, not a language as a whole. If a language or library allows for enhancements based on its strengths (in this case, the ability to code in early exits), those are perfectly valid in benchmarking languages and libraries.

Put another way, one of the benefits of Python (and one of the drawbacks of using an external library) is that you have more control over the algorithm and exactly what it does.
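
E.g. with numpy you have to compute the whole result before you can check it; there's no hook to bail out partway through np.convolve. A sketch of the same check as above, again with made-up names:

    import numpy as np

    def all_outputs_zero_np(signal, kernel):
        # Computes every output before the check; no early exit possible.
        out = np.convolve(signal, kernel, mode="valid")
        return not out.any()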

That said, a sample size of one will hardly give you an accurate picture.
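
(timeit.repeat is an easy way to get more than one sample; a sketch, assuming the hypothetical all_outputs_zero from above is in scope:)

    import timeit

    signal = list(range(10_000))  # made-up inputs, just for the demo
    kernel = [1, -2, 1]

    times = timeit.repeat(
        "all_outputs_zero(signal, kernel)",
        globals=globals(),
        number=100,  # calls per measurement
        repeat=5,    # independent measurements
    )
    print(min(times) / 100)  # best observed seconds per call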


"Measuring implementations of the same algorithm is how you benchmark" that one algorithm.

"Measuring implementations of [different] algorithms is how you benchmark algorithms"


Yes, and in this case, we can think of pure Python and its capabilities as one "algorithm", and calling out to a library as another.


That is probably not what readers of your comment understand by the word algorithm.


You're right, I wasn't careful. Without precomputing F, the speedup is only around 7x.



