My bet is on understanding the fundamental principles. Building an Airbus plane or a starship requires a fundamental understanding of aerodynamics, chemistry, materials science, and physics.
DNNs will definitely not get us there in their current form.
I am very curious to see whether concepts from cognitive science and theoretical linguistics (like the Language of Thought paradigm as a framework for cognition, or the Merge function as a fundamental cognitive operation) will be applied to machine learning. They seem to be among the best candidates for fundamental principles of cognition.
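For readers unfamiliar with it, Merge (from Chomsky's Minimalist Program) is usually formalized as a binary operation that combines two syntactic objects into an unordered set, and builds hierarchy by applying recursively. A minimal sketch of that idea (my own illustration; the function and variable names are mine, not from any ML framework):

```python
def merge(a, b):
    """Combine two syntactic objects into an unordered pair {a, b}.

    Using frozenset makes the result hashable, so merged objects can
    themselves be merged again -- recursion is what yields hierarchy.
    """
    return frozenset([a, b])


# Recursive application builds nested structure, e.g. {the, {old, man}}:
np_phrase = merge("the", merge("old", "man"))

# Merge is symmetric: the pair is unordered, so argument order is irrelevant.
assert merge("old", "man") == merge("man", "old")
```

The point of the sketch is only that a single, extremely simple operation can generate unbounded hierarchical structure, which is why some see it as a candidate "fundamental principle" of cognition.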
They don't need to solve the problem of reasoning; they only need to simulate reasoning well enough.
They are getting pretty good; people already have to "try" a bit to find examples where GPT-3 or DALL-E goes wrong. Give it a few more billion parameters and more training data, and GPT-10 might still be as dumb as GPT-3, but it will be impossible (or irrelevant) to prove.