To be fair, neural program synthesis has been an active area of research for a while, and it doesn't seem that it's ever going to take off for good the way machine vision or NLP have.
Simple, narrow stuff that requires expensive AI experts, and good hardware, to keep updating. Alternatively, you can pay a cheaper human to do more jobs, even better, with modern tools that make most of them easy. That's what almost all successful and almost-successful companies do.
I keep linking to this page:
https://blog.keras.io/the-limitations-of-deep-learning.html
But what Chollet says there is still the case. Machine-learning a mapping from arbitrary specifications to programs is vastly more difficult than classification. Unless someone comes up with a completely new architecture that does for neural program synthesis what CNNs did for vision and LSTMs did for sequence learning, and then some, there aren't going to be any big advances in the field.
Source: I study algorithms that learn programs from examples for my PhD, and you need three things for that which neural nets lack: a) generalisation, b) the ability to learn recursive functions, and c) higher-order representations (i.e. quantified variables).
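In case "learning programs from examples" sounds abstract: below is a minimal sketch of the non-neural version of the task, i.e. inductive synthesis by enumerative search. Everything in it (the tiny DSL, the 'rec' primitive, the factorial target) is made up for illustration and not taken from any real system. The point is that the learned program is recursive and generalises exactly to unseen inputs, which is what (a) and (b) above demand and what neural nets struggle to deliver.

    # Minimal sketch of inductive program synthesis over a made-up DSL:
    # enumerate expressions by size and return the first one consistent
    # with the input/output examples.
    #
    # Grammar: E ::= 'x' | 'rec' | 0 | 1 | ('+', E, E) | ('*', E, E) | ('if0', E, E)
    # 'rec' is a built-in recursive call f(x - 1); ('if0', a, b) means
    # "a if x == 0 else b" and only evaluates the branch it takes.

    def evaluate(expr, x, prog):
        """Evaluate expr at input x; prog is the whole program, used by 'rec'."""
        if expr == 'x':
            return x
        if expr == 'rec':                  # recursive call: f(x - 1)
            if x <= 0:                     # guard: recursion must bottom out
                raise ValueError('recursed past zero')
            return evaluate(prog, x - 1, prog)
        if isinstance(expr, int):          # the constants 0 and 1
            return expr
        op, a, b = expr
        if op == 'if0':                    # lazy conditional on x == 0
            return evaluate(a, x, prog) if x == 0 else evaluate(b, x, prog)
        va, vb = evaluate(a, x, prog), evaluate(b, x, prog)
        return va + vb if op == '+' else va * vb

    def expressions(size):
        """Yield every DSL expression with exactly `size` nodes."""
        if size == 1:
            yield from ('x', 'rec', 0, 1)
            return
        for left in range(1, size - 1):
            for a in expressions(left):
                for b in expressions(size - 1 - left):
                    yield ('+', a, b)
                    yield ('*', a, b)
                    yield ('if0', a, b)

    def synthesize(examples, max_size=7):
        """Return the smallest program consistent with all the examples."""
        for size in range(1, max_size + 1):
            for prog in expressions(size):
                try:
                    if all(evaluate(prog, x, prog) == y for x, y in examples):
                        return prog
                except ValueError:         # recursed on an input it can't handle
                    continue

    # Four examples of the factorial function:
    prog = synthesize([(0, 1), (1, 1), (2, 2), (3, 6)])
    print(prog)                            # ('if0', 1, ('*', 'x', 'rec'))
    print(evaluate(prog, 5, prog))         # 120 -- an input it never saw

Of course this brute-force search explodes combinatorially with program size; real systems in this area survive by pruning the space with types, background knowledge and the like. But the recursion and the exact generalisation come for free here, whereas for a neural net they are the hard part.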
Personally, I was very excited about DeepMind's differentiable neural computers, but they seem very hard to train on anything but toy problems.