
There are so many possible directions that somebody might want to go, though. If you're ultimately interested in reinforcement learning, you don't immediately need to understand EM in order to "get" Q-learning. Or, if you want to work on discrete problems related to computer vision, a basis in linear programming and algorithms will get you further than logistic regression or SVMs.

I'll agree that anybody doing machine learning should understand the basic idea of maximum likelihood learning (e.g., logistic regression). But how far do you go beyond that? This is really the heart of the issue, I think. Yes, there are a ton of things that are really useful to know, and any machine learning student should learn them at some point. If you're trying to do an interesting project that teaches you something about research (and thus looks good on a graduate school application), though, what is the best use of your time?




You're right that you need direction. Each subclass of machine learning has some elementary algorithms worth learning:

For unsupervised learning or probabilistic latent models: the EM algorithm, Metropolis-Hastings sampling, variational methods (variational EM, variational Bayes, expectation propagation).

For supervised learning, linear regression, perceptrons, support vector machines.

For reinforcement learning, Q-learning, E^3, Rmax.
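Of the reinforcement-learning entries above, tabular Q-learning is the simplest to write down. Here is a minimal sketch on a hypothetical chain environment of my own invention (states 0..n-1, reward only at the right end); the hyperparameters are arbitrary illustrative choices:

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a toy chain MDP.

    Actions: 0 = step left (clamped at 0), 1 = step right.
    Reward 1.0 only on reaching the rightmost state, which ends the episode.
    """
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the greedy value of s2.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(0)  # for reproducibility of this sketch
Q = q_learning_chain()
```

The point of the example is the update rule: it improves the estimate of one state-action value from a single observed transition, with no model of the environment and no EM-style latent-variable machinery, which is why you can "get" Q-learning without that background.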





