
Another less-recognised point is that in industry, you also need to ask "how can I maintain this?" and "what can go wrong with my algorithm?".

In one use case, a "blip" in your algorithm might mean showing the wrong kind of advertisement to a user. Not great, but ultimately no big deal. In another, it might mean automatically buying billions of dollars' worth of pumpkin futures (cf. Knight Capital).

In the latter case you need a much greater penalty on model complexity, and much more emphasis on interpretability.
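One concrete way to impose that penalty is L1 (lasso-style) regularisation, which drives small coefficients to exactly zero and leaves fewer features to audit. A minimal sketch, assuming a linear model with hypothetical fitted weights (the soft-thresholding operator below is the proximal step used by coordinate-descent lasso solvers; the coefficient values are made up):

```python
# Sketch: how an L1 complexity penalty yields a sparser, more
# interpretable model. Stronger penalty => more coefficients hit zero.

def soft_threshold(w, lam):
    """Proximal operator of lam * |w|: shrink w toward zero, clip at zero."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

coefs = [3.2, -0.4, 0.05, -2.1, 0.3]  # hypothetical fitted weights

weak = [soft_threshold(w, 0.1) for w in coefs]    # mild penalty
strong = [soft_threshold(w, 0.5) for w in coefs]  # strong penalty

# With the stronger penalty, more coefficients are exactly zero, so a
# human reviewer has fewer surviving features to reason about.
print(sum(1 for w in weak if w == 0.0))    # 1
print(sum(1 for w in strong if w == 0.0))  # 3
```

The knob here (lam) is exactly the "penalty on model complexity": in a high-stakes setting you turn it up and accept some accuracy loss in exchange for a model you can actually inspect.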




While I agree with your point (and often use this in interview questions), that wasn't what caused the Knight Capital problem.

That was bad software engineering and deployment practices, and had nothing to do with interpretability of the model (actually it had little to do with the model at all). They repurposed a feature toggle, then misdeployed the code: http://pythonsweetness.tumblr.com/post/64740079543/how-to-lo...
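The failure mode being described is worth spelling out: an old flag gets a new meaning, but a partial deploy leaves some servers running the old semantics. A minimal sketch, with hypothetical function and flag names (not Knight Capital's actual code):

```python
# Sketch of the repurposed-feature-toggle hazard. The same flag means
# different things in the old and new code, so a fleet running a mix of
# versions behaves inconsistently under identical configuration.

def route_order_old(flags):
    # Legacy build: the flag activates an obsolete test module,
    # assumed to be dead code.
    return "TEST_MODULE" if flags.get("legacy_flag") else "NORMAL"

def route_order_new(flags):
    # New build: the same flag was repurposed to enable a new router.
    return "NEW_ROUTER" if flags.get("legacy_flag") else "NORMAL"

flags = {"legacy_flag": True}

# After a deploy that misses one server, the same config produces
# opposite behaviours across the fleet:
print(route_order_old(flags))  # TEST_MODULE (the stale server)
print(route_order_new(flags))  # NEW_ROUTER  (the updated servers)
```

No amount of model interpretability helps here; the defence is deployment hygiene (verify every host, never recycle flag names).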

I understand that this was just an example, but I'm sure someone will misread it as an account of what actually happened in that case.


Yep - I meant it as an example of a general catastrophic software glitch rather than an ML algorithm gone haywire.



