In 1997, machines defeated a World Chess Champion for the first time, using brute-force "dumb search." Critics noted that while "dumb search" worked for chess, it might not necessarily be a general strategy applicable to other cognitive tasks.[a]
In 2016, machines defeated a World Go Champion for the first time, using a clever form of "dumb search" that leverages compute, DNNs, reinforcement learning (RL), and self-play. Critics noted that while this fancy form of "dumb search" worked for Go, it might not necessarily be a general strategy applicable to other cognitive tasks.[a]
In 2024, machines solved insanely hard math problems at the Silver Medal level at the International Math Olympiad for the first time, using a clever form of "dumb search" that leverages compute, DNNs, RL, and a formal language. Perhaps "dumb search" over cleverly pruned spaces isn't as dumb as the critics would like it to be?
The success in Go was very much not dumb search. Where dumb search had failed to achieve the level of even a very weak player, the neural net's "intuition" about good moves without any search was already very strong. Only the combination was superhuman, and that was anything but the dumb search that had been tried before.
Today's announcement is also not about proving Lean theorems by "dumb search". The success is about search + neural networks.
You're attacking the critics for criticizing the approach that failed, while confusing it with the approach that actually works to this day.
> My point is that you're attacking a position that no real critic holds
I'm not so sure. I wrote my comment after seeing quite a few comments on other threads here that read like fancy versions of "this is just brute-force search over cleverly pruned trees." Search the other threads here, and you'll see what I mean.
A brute-force search has perfect knowledge. Calling it "dumb" encourages bad analogies -- it's "dumb" because it doesn't require advanced reasoning. It's also "genius" because it always gets the right answer eventually. And it's hugely expensive to run.
And you keep shifting the goalpost on what's called "dumb" here.
By their nature, hard math problems do not have fast algorithmic solutions--you can only solve them by search, or by clever search. Mathematicians have heuristics that help us decide intuitively what's going to work and what can make progress. But in the end--there is only search.
High efficiency "search" is necessary to reach AGI. For example, humans don't search millions of potentially answers to beat ARC Prize puzzles. Instead, humans use our core experience to shrink the search space "intuitively" and deterministically check only a handful of ideas. I think deep-learning guided search is an incredibly promising research direction.
By "dumb" I assume you meant brute-force. Search, as opposed to extrapolation, is what actually produces suprise or creative results, whether it happens in a person's head or on a computer. The issue is to produce the heuristics that can let one push back the combinatorial explosion.
It kinda had to be this way, I think. There's a something-from-nothing problem. Douglas Adams brilliantly starts from this point.
We don't understand something from nothing. We don't even have the language to describe it. Concepts like "complexity" are frustratingly resistant to formalization.
"There is no free will." Has recently resurged as a philosophical position... like it did in response to Newton's mechanics.
Matter from void. Life from minerals. Consciousness from electrons. Free will from deterministic components. Smart reasoning & intelligent rationalisation from dumb search, computation, DNNs and such.
I don't think this debate was ever going to be settled by anything short of empirical demonstration.
Endnote: Deep Blue's victory over Garry was a bunch of preprogrammed bulls--t. Rematch!
If one of the lessons of LLMs was that much of "common sense" is covered by the written language corpus - that is, perhaps, basic human intelligence is covered by language -
then should we expect any less with mathematics, where written language is the normal way knowledge is propagated, and where formal proofs are wanted? An important distinction here is the coupling of search (not an LLM for this one), a formal math language, and theorem proving. Math intelligence may not come merely from the written math corpus, but adding a formal language and theorem proving sounds pretty powerful.
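For a sense of what that formal target looks like, here is a toy Lean 4 sketch (my own illustration; the real system's statements are vastly harder): a theorem statement plus a machine-checkable proof, where the proof checker gives the unambiguous pass/fail signal that a search procedure can optimize against.

    -- Toy Lean 4 example (mine, not from AlphaProof). The statement is the
    -- search target; the kernel checks the proof mechanically, with no human
    -- judgment, which is what makes formal proofs a clean target for search.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b

    -- Even a "searchy" tactic works: `decide` exhaustively evaluates the
    -- decidable proposition and produces a proof term the kernel accepts.
    example : 2 + 2 = 4 := by decide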
All this still lacks self-directed goals. An intention. For now that's taken care of by the human asking questions.
> that is, perhaps, basic human intelligence is covered by language.
It isn't. That's why LLMs are still terrible at basic reasoning tasks. The dataset doesn't contain the human intelligence, just the end results. Nobody is training their LLMs on brain scans.
---
[a] http://www.incompleteideas.net/IncIdeas/BitterLesson.html