I have to go ahead and completely disagree with you there. It's certainly more complex under the hood, but:
A) Education has never been this accessible - see https://www.coursera.org/, YouTube, MOOCs, blog posts, etc., which did not exist anywhere for free even 10 years ago
B) APIs and abstractions make a lot of this quite accessible (e.g. AWS, TensorFlow, etc.). Yes, these are "magical" APIs, but you could make the same argument about a C compiler compiling to binary, all the way down to logic gates and electrical pulses
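To make the point concrete, here's roughly what that abstraction buys you today (a minimal sketch, assuming TensorFlow's bundled Keras API and the MNIST dataset; the layer sizes and training settings here are arbitrary, not anyone's recommended recipe):

    import tensorflow as tf

    # Load handwritten digits and flatten each 28x28 image to a vector.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

    # Define, compile, and train a classifier in a handful of lines.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1)

A dozen lines and you have a working digit classifier - that level of accessibility simply didn't exist a decade ago.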
33 is young in terms of education. I highly doubt you're progressing slowly because of your age; it's more likely your attitude that's holding you back.
I think your points are correct, but there's another one you might be disregarding, and it's what's behind the GP poster's feelings: the volume of knowledge to be learned if you want to do anything meaningful from anywhere near 'first principles' is orders of magnitude greater than it used to be. If you just want to be on the cutting edge using a "magical API", then sure, download Keras or TensorFlow and play with some DNNs. But if you want to understand everything you're doing at the theory level, then you've got to learn so much more than you did back in the 90s.
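To give a feel for the gap, here's a toy sketch of roughly what a call like model.fit hides, written out by hand in plain NumPy (the two-layer network, data, and hyperparameters are all made up for illustration, not any library's actual internals):

    import numpy as np

    # A tiny two-layer network trained "from first principles":
    # initialization, forward pass, backpropagation, and the update
    # rule are all spelled out instead of hidden behind an API.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 2))                       # toy inputs
    y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]  # toy XOR-ish labels

    W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros((1, 8))
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))
    lr = 0.5

    for step in range(2000):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

        # backward pass: chain rule for sigmoid + cross-entropy loss
        dlogits = (p - y) / len(X)
        dW2 = h.T @ dlogits; db2 = dlogits.sum(0, keepdims=True)
        dh = (dlogits @ W2.T) * (1 - h**2)         # tanh derivative
        dW1 = X.T @ dh;      db1 = dh.sum(0, keepdims=True)

        # plain gradient descent update
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

And that's before you get to why any of it works: initialization scales, the chain rule, loss functions, optimization theory, and so on. That's the extra volume I mean.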
Thanks! That's exactly what I mean. I don't want to use a magical API and "just play with data". I really want to be able to understand it from the ground up.
That doesn't mean I have to read every single line of TensorFlow, but I want to be able to do that when it's needed, so that such tools won't be a magical black box for me.
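For what it's worth, Python even makes that easy: you can open any piece of TensorFlow on demand (a small sketch using the standard library's inspect module):

    import inspect
    import tensorflow as tf

    # Print the actual source of the Dense layer's forward pass,
    # then find the file it lives in, so the "black box" opens on demand.
    print(inspect.getsource(tf.keras.layers.Dense.call))
    print(inspect.getsourcefile(tf.keras.layers.Dense))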
That makes sense, but you have to draw a line somewhere - you can't possibly know everything from the ground up. You'd have to start with particle physics, atoms, and molecules just to get to the basics of electricity - it's impossible for one person to know all of this.
Ground-up knowledge is difficult to obtain in any field. How long do you think it would take you to get a complete understanding of a modern car from the ground up?
The gap between implementation and understanding is large here, as in many fields. It depends where you want to contribute. I can build (i.e., assemble) a computer. I could learn to build a small, basic computer out of transistors and logic gates, etc. There's a difference between a technician, an engineer, and an inventor. To be an inventor takes a lot of work and experimentation, probably proportional to the novelty of the invention.
Not to overdo analogies, but you don't need to rebuild your own internal combustion engine in a unique way to drive a car, or to contribute improvements to a car. The more you understand how and why TensorFlow works, the more you can do with it. It depends on whether you want to build on top of that platform and use it, or build on its concepts for something else.
"Ground up" might be the wrong term here. I don't have right words either, but I feel GP is talking about that level between full knowledge and the "I have no idea what I am doing" level of downloading models from Kaggle, stuffing them into TensorFlow and calling yourself a "Deep Learning expert".
Even though I lack a name for that level, here's how I would describe some of its attributes in qualitative terms:
- Knowing the basic lay of the land all the way down. That is, at least knowing most of the black boxes and what they do, even if you don't exactly know how they do it.
- Being able to solve your own problems, instead of running around like a headless chicken every time you hit a speed bump in your work.
- Being able to reason from those first-ish principles. You're able to sketch solutions within the scope of the extended domain, and as you begin implementing and need to understand various black boxes in more depth, the basic shape of your solution isn't usually invalidated by what you learn.
I disagree that car design is a stable field. Tesla is selling a radically different car design. All car designers have to face the dawn of self-driving cars.
In every field the total knowledge set is always increasing, which is both empowering, because we stand on the shoulders of giants, and diminishing, because there is less low-hanging fruit. There is always more low-hanging fruit, though; the trick is to see it hanging there. ML is a wonderful opportunity because the magical APIs can do far more than they're currently used for.
> APIs and abstractions make a lot of this quite accessible (e.g. AWS, TensorFlow, etc.). Yes, these are "magical" APIs, but you could make the same argument about a C compiler compiling to binary, all the way down to logic gates and electrical pulses
That's a really interesting analogy; I'm wondering what others think about it.
And does it really make a difference? I don't understand compilers, but it still took me a long time to learn how to write correct input for a compiler and how to debug its output.