Instead, learn decision trees and, more importantly, enough statistics so you aren't dangerous.
Do you know what the central limit theorem is and why it is important? Can you do 5-fold cross validation on a random forest model in your choice of tool?
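If the central limit theorem feels abstract, a quick simulation makes it concrete. This is just an illustrative sketch (assumes NumPy; the exponential distribution and sample size are arbitrary picks for the demo): means of repeated samples from a very skewed population still pile up into an approximately normal shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: exponential distribution (heavily skewed, mean = 1.0).
n, trials = 50, 10_000

# Draw 10,000 independent samples of size 50 and take each sample's mean.
sample_means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

# CLT: the sample means are approximately normal with
# mean ~= 1.0 and standard deviation ~= 1/sqrt(50) ~= 0.141,
# even though the underlying population is nothing like normal.
print(sample_means.mean())
print(sample_means.std())
```

That shrinking spread (sigma / sqrt(n)) is why averages of bigger samples are more trustworthy, which is the intuition behind most of the inference you'd learn in an intro stats course.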
Fine, now you are ready to do deep learning stuff.
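For reference, the cross-validation exercise mentioned above is only a few lines in scikit-learn (a sketch, assuming scikit-learn is installed; the bundled breast-cancer dataset is just a convenient stand-in for your own data):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Any labeled dataset works; this built-in one keeps the example self-contained.
X, y = load_breast_cancer(return_X_y=True)

model = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validation: train on 4/5 of the data, score on the held-out 1/5,
# rotating the held-out fold so every row is tested exactly once.
scores = cross_val_score(model, X, y, cv=5)
print(scores)         # accuracy per fold
print(scores.mean())  # average accuracy across folds
```

The point of the exercise isn't the code, it's understanding why the cross-validated score is a more honest estimate of generalization than accuracy on the training data.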
The reason I say not to do neural networks first is that they aren't very effective with small amounts of data. When you are starting out you want to be able to iterate quickly and learn, not wait hours for a NN to train and then be unsure why it isn't working.
I don't think it's a good strategy to discourage people from diving right in. There are many courses and books out there that are suitable even for a beginner who wants to learn about NN.
Of course it's important to broaden your horizons eventually, but starting with the theory without the applications is not how most humans learn best. Learning by doing is.
The problem with diving into neural networks is that they are slow to train (with large amounts of data anyway), and difficult to debug. This means it isn't really a great place to start.
There are some good, short MOOC courses on statistics and probability on Coursera these days. I've been working my way through the Duke sequence with Mine Çetinkaya-Rundel and have found them very helpful. The courses correspond to the material in this OpenIntro text: