There isn't a scientific field where every single paper is groundbreaking. It's a Brownian motion of small incremental innovations, until eventually we stumble upon something big (like deep learning). In no way is machine learning unique in this.
Except ... while machine learning is great and has made significant strides, it's not yet a science. It essentially consists of a series of sophisticated, mathematically informed recipes for feeding data to giant algorithms and having them create something useful (maybe very useful, but still).
An analogy from a couple of years ago is bridge building before physics. You accumulate rules of thumb, you get a vague understanding of what works. You get better. But you aren't producing a systematic field.
And that implies mere advancement isn't necessarily progress (which isn't to say there's no progress, but chasing ever-larger SOTA results isn't it, as the article notes).
I would agree that as a science machine learning is in its infancy. There's statistical learning theory, but arguably that's a fairly small subfield, if not a separate field completely. ML has largely been an engineering enterprise so far, but in my experience the interest in developing underlying theory has ramped up in recent years.
What we should be careful about is not being too strict when defining "science". In my view, the goal of any scientific field is to build understanding that is useful for predicting the outcomes of experiments. This understanding could be defined mathematically, but it doesn't have to be. I don't see why building heuristics can't be part of this, assuming such heuristics reliably predict the outcomes of experiments.
> while machine learning is great and has made significant strides, it's not yet a science. It essentially consists of a series of sophisticated, mathematically informed recipes for feeding data to giant algorithms and having them create something useful (maybe very useful, but still).
So... scientists need to keep doing science until it is. This is what happened with biology over the last 50 years, after 2000 years of pinning things on cards and putting them in drawers.
My original point was in response to OP's claim that not every paper needs to be brilliant. The thing about "normal science" is that it begins with more or less verified theories and extends their theory and practice - until they fail, the edge of the theories is reached, and the scientific community must search for alternative hypotheses. Machine learning, by contrast, works with a bunch of practices, rules of thumb, and suggested procedures. These can't really fail, since they're roughly repeated across different domains. The problem is that this can go on forever without ever requiring new theories or approaches.
So I'd claim we're really in that situation right now: the exploring-new-ideas, "crisis of science" phase, where essentially people have to start brainstorming (and not all ideas are good here either, but they need to be somewhat original).
All this is applying Thomas Kuhn's Structure of Scientific Revolutions model very roughly.
> An analogy from a couple of years ago is bridge building before physics. You accumulate rules of thumb, you get a vague understanding of what works. You get better. But you aren't producing a systematic field.
Wouldn't this be more like Kuhn's pre-paradigmatic (pre-scientific) activity than moving from one scientific paradigm to another?
Yeah - SVMs were considered optimal, then they got smashed by deep networks; now people keep bringing them back and saying "they're just as good", and yet deep networks keep being used to do all the breakthrough work.
> An analogy from a couple of years ago is bridge building before physics. You accumulate rules of thumb, you get a vague understanding of what works. You get better. But you aren't producing a systematic field.
Sounds like science to me; systematic recording of (perceived) cause and effect.
Arguably, deep learning and cell biology both look like equal parts pure wizardry and flailing in the dark, but maybe that's just because we haven't gathered enough pieces yet, and not necessarily because people are doing the wrong things and thus failing to advance.