Data science/machine learning bootcamps have been around for a while (Galvanize and Metis being common examples in San Francisco), but apparently job placement is not in a good place (as with regular bootcamps).
Indeed, machine learning/deep learning has become much more accessible thanks to the number of free guides such as this one. But that means data science job placement will become more difficult as competition increases, with more gatekeeping/requirements (e.g. Masters/Ph.D.s).
The issue I've heard from a few people in hiring is that there is a surplus of junior data scientists from these camps and a shortage of senior data scientists to manage them. The problems are not dissimilar to tech hiring in general, but companies need a lot more SWEs than data scientists.
Most companies are going to utilise ML to some extent. Once the technology and tooling improve, they'll need boots-on-the-ground engineers, not labs with R&D teams.
Honestly, unless these ML bootcamps are extensive courses on calculus, linear algebra, and statistics, and not just "Here's k-means; memorize it," I doubt they'll harm the market for grad-school-educated data scientists.
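To be fair, the "memorizable recipe" part of k-means really is small. A rough sketch of Lloyd's algorithm in plain NumPy (toy data and all names here are my own illustration, not from any bootcamp's curriculum) — the loop is trivial to memorize; knowing why it converges, or how to pick k, is the part the statistics background buys you:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign points, average clusters, repeat."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its points (keep it if empty)
        centroids = np.array([X[labels == j].mean(axis=0)
                              if (labels == j).any() else centroids[j]
                              for j in range(k)])
    return labels, centroids

# two tight, well-separated toy blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
               rng.normal(5.0, 0.1, size=(20, 2))])
labels, centroids = kmeans(X, 2)
```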
I'd like to offer a counterpoint. I attended one of the machine learning bootcamps mentioned above, and it was transformative for me. I got hired within a month, doubled my salary to over $100k, and landed a job that I enjoy and find intellectually stimulating. All this while having little to no technical experience (the only math I took in college was intro to stats, and my pre-bootcamp career was in a non-technical capacity).
I completely understand why there is such a stigma around bootcamps. Nobody can deny that they lack the depth you'd get at a "real" program. But they can be amazing for career switchers like me, who had no real direction in college. Don't look down your nose at them.
Too bad ML as a service is already largely cornered by $FANG
Wat?
Neither Facebook nor Netflix offer outsiders access to their ML platform, and you completely forgot Azure, which IMHO has the most mature offering of the big 3 in this space.
You cannot learn machine learning or deep learning in a few months. You can learn to copy what these guides do, but if you want to do something even slightly different you will feel like you know nothing, because you probably don't know the math behind why these things work, so when you want to change them you won't know how.
I don't deny that knowing the math/theory is useful, but I wonder if we sometimes overestimate the degree to which it is essential. For example, backprop with SGD is a good foundation for many, many applications of NNs, and pre-built implementations exist that let you use the technique without understanding the details of the math. And with those tools, you can experiment with many different combinations of features, different architectures, etc.
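The whole recipe those pre-built loops automate really is just "predict, take the gradient, step downhill." A minimal NumPy sketch of SGD on a toy linear model (the data and names are invented for illustration; frameworks like PyTorch or Keras package the same loop and derive the gradients for you):

```python
import numpy as np

# toy data: y is a noisy linear function of three features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# the whole SGD recipe: predict, compute the gradient, step downhill
w = np.zeros(3)
lr = 0.05
for epoch in range(100):
    for xi, yi in zip(X, y):
        grad = (xi @ w - yi) * xi  # gradient of 0.5*(pred - y)^2 w.r.t. w
        w -= lr * grad
```

After training, `w` lands close to `true_w` — and notice nothing in the loop required knowing why SGD converges, only the mechanical steps.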
Of course, understanding the theory will help you know which architectures are most likely to be productive and what-not, but this whole field is very empirical anyway. So if your experimenting is a little less guided by intuition rooted in theory, that's not exactly the end of the world.
I wasn't talking about backpropagation. But sometimes you need to change the loss function, or the shape of the network, or combine two models. Backpropagation stays the same in those cases, but the rest of the math doesn't.
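To make the loss-function point concrete: in a sketch like the SGD loop discussed above, swapping squared error for absolute error changes only one line — but deriving that line is exactly where the calculus comes in (d/dp (p−y)² = 2(p−y), while d/dp |p−y| = sign(p−y)). The toy data and function names below are my own illustration:

```python
import numpy as np

def fit(X, y, grad_loss, lr=0.05, epochs=100):
    """Same SGD loop regardless of loss; only grad_loss changes."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w -= lr * grad_loss(xi @ w, yi) * xi
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0])

# squared-error gradient: 2*(pred - target)
w_sq = fit(X, y, lambda p, t: 2 * (p - t))
# absolute-error gradient: sign(pred - target), derived by hand
w_abs = fit(X, y, lambda p, t: np.sign(p - t), lr=0.01)
```

Both recover roughly the same weights on this clean toy data, but without the derivative of the new loss you can't write the second `lambda` at all — that's the "other math stuff."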
The only reason you think it's easy is that you're just copying what others have done without changing anything. Try to go beyond that and you'll change your mind quite quickly.
The reason you need an education in this theory is twofold:
1. How do you fix something that is broken, in limited time?
2. How do you assure that the model is reliable?
Being able to answer these confidently, from reasoning grounded in theory, is where the real value lies.
Sure, but it's a continuum, not a binary. You can do more with your car if you have degrees in mechanical engineering and fluid dynamics, but a person with nothing but a high-school diploma can still upgrade a camshaft.
The point is, you can do a lot of very useful things with ML without needing the entirety of the theoretical underpinnings. Of course you can't do everything, but not everybody needs to be able to do everything.
So, as I said, you can copy what others do. That's fine, but you don't know deep learning; you know how to apply it based on examples, which is fine for a lot of things.