Hacker News

I agree that GIT (Gödel's incompleteness theorem) limits human provers as much as it limits automated ones, but I don't agree that it's irrelevant to the issue.

A human mathematician can lean on intuition in ways that a computer can't. Given a theorem that resists proof, a human might develop a feel for whether it's worth continuing to prove via conventional means or whether the troublesome territory is better approached from a different axiom set. That intuition may be driven by the problematic theorem being one of the true-but-unprovable ones that Gödel assures us exist, without that cause being apparent to the mathematician.

The automated prover, on the other hand, faces a halting problem here. Unless it is given some explicit reason to expect that the target is not attainable, it might try forever. So such a prover might need to be GIT-aware in ways that a human doesn't.
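A minimal sketch of the asymmetry, using a toy forward-chaining prover (the Horn-rule format and the step budget are made up for illustration). In this finite system the prover can detect a fixpoint and answer "no"; in full first-order logic the search space is infinite, so the budget cutoff is all it has, and it cannot distinguish "unprovable" from "not found yet":

```python
def prove(axioms, rules, goal, budget=10_000):
    """Naive forward-chaining prover over Horn-style rules.

    rules: list of (premises, conclusion) pairs.
    Returns True if goal is derived within `budget` inference steps,
    False if a fixpoint is reached without it (only detectable here
    because the toy system is finite), None if the budget runs out.
    """
    known = set(axioms)
    steps = 0
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            steps += 1
            if steps > budget:
                return None  # gave up: unprovable, or just slow?
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
                if conclusion == goal:
                    return True
    return False  # fixpoint: genuinely underivable from these rules

rules = [(("a", "b"), "c"), (("c",), "d")]
print(prove({"a", "b"}, rules, "d"))  # True
print(prove({"a", "b"}, rules, "z"))  # False
```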




As a human I find this explanation emotionally satisfying. It appeals to my sense of human exceptionalism. However, how many true but unprovable theorems do we actually know about? Has anyone ever bothered to compile a list of theorems that are true but unprovable under axioms xyz? I would think such research is exceedingly hard to manage, since showing a theorem unprovable under one axiom set xyz requires working in some companion axiom set abc, and such theorems (at least the non-trivial, interesting ones) are possibly hard to reach.

This intuition for unprovability may just be a mechanism for escaping local minima that gets lucky often enough.


> true but unprovable under the axioms xyz?

Careful: we don't really talk about things which are "true but unprovable" in first-order logics.

By Goedel's completeness theorem, if a statement is true in all models of a first-order theory then it is provable. For any consistent, recursively axiomatizable first-order theory of sufficient strength, there exists some sentence which is neither provable nor refutable [Goedel's First Incompleteness Theorem]: by contraposition of completeness, there exist models of the theory where the sentence is true, and models where it is false.
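In symbols (these are the standard statements of the two theorems, restated here for reference):

```latex
% Completeness (first-order logic): semantic entailment and
% provability coincide.
T \models \varphi \;\Longleftrightarrow\; T \vdash \varphi

% First incompleteness theorem: for consistent, recursively
% axiomatizable T interpreting enough arithmetic, there is a
% sentence G with
T \nvdash G \quad\text{and}\quad T \nvdash \neg G

% By the contrapositive of completeness, G holds in some models
% of T and fails in others:
\exists\, M_1 \models T,\ M_1 \models G
\qquad
\exists\, M_2 \models T,\ M_2 \models \neg G
```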

So when you say "true but unprovable", you're probably talking about truth in the so-called "intended model": if something is independent of the first-order system (e.g. ZF) you're using as a foundation of mathematics, then you get to decide whether it's true or not! Once you've proven independence via e.g. forcing, it's up to you to decide whether the sentence should hold in your intended model: then you can add the appropriate axiom to your system.

Choice is independent of ZF: most mathematicians are OK with choice, so they choose to work in ZFC, but some aren't. Neither option is "more valid" or "more true" from a purely logical perspective.



That's certainly true, but Gödel-type unprovability is separate from logical independence, and indeed much harder to make lists of.

My earlier point is not that we could know for sure that our struggles are due to being up against such a wall, just that our ability to suspect as much is relevant to how we allocate our time.


Oh, such as a list of statements unprovable under a specific Gödel numbering? Definitely much more difficult; I never saw such a list in grad school, although I wouldn't rule out the existence of one.


> Godel-type unprovability is separate from logical independence

What do you mean by this? The Goedel sentence of a system is logically independent from the system by the standard meaning of "logical independence".


One could argue human intuition is just statistical sampling.

In effect, we might not be able to prove something, but we've observed that, when it's assumed to be true in other contexts, it seems to yield the expected result.

I don't see why we couldn't build a similar system of intuition into a computer as well.


> One could argue human intuition is just statistical sampling.

Human intuition lets us walk and drive cars. It has proven really hard to replicate using computers. I really doubt we will solve this any time soon.


Human intuition can also add small numbers and recognize faces, and computers can do those already. Computers replicating all of human intuition is not a prerequisite for computers beating one aspect of human intuition.


That's where ML comes in, to provide the 'intuition' for pruning the search space.
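As a toy illustration of heuristic-guided pruning: best-first search where a scoring function decides which branch to expand next. The `score` here is a hand-written stand-in for whatever a trained model would provide in a real prover; everything else is generic search machinery:

```python
import heapq

def guided_search(start, expand, is_goal, score, budget=1000):
    """Best-first search: expand the most promising state first.

    `score(state)` plays the role of the learned 'intuition';
    lower means more promising. `budget` caps total expansions.
    """
    frontier = [(score(start), 0, start)]
    seen = {start}
    tie = 1  # tie-breaker so heapq never compares states directly
    for _ in range(budget):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), tie, nxt))
                tie += 1
    return None

# Toy problem: reach 0 from 37 by decrementing or halving.
# The "intuition" is simply distance to zero.
result = guided_search(
    37,
    expand=lambda n: [n - 1, n // 2],
    is_goal=lambda n: n == 0,
    score=abs,
)
print(result)  # 0
```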


That's simply not true. The kinds of statements that are unprovable aren't "intuitive". Gödel provided an algorithm for creating them!


But there's no reason to believe that his algorithm gives us all of them.

Nobody has proven, for instance, that the Goldbach conjecture is unprovable, but the collective intuition of the community of number theorists is such that nobody spends much time trying to prove it anymore.
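Part of what feeds that collective intuition is sheer brute-force evidence; a few lines are enough to check the conjecture for small even numbers (the bound of 10,000 here is arbitrary, and actual verification efforts have gone vastly further):

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    """True if the even number n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Every even number up to 10,000 checks out -- evidence, not proof.
print(all(goldbach_holds(n) for n in range(4, 10_001, 2)))  # True
```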

What, aside from some coded-in GIT awareness, would lead an automated prover to do the same?


Why can't an automated prover have intuition?

Today's automated provers don't, but then not so many years ago the successful computer chess players were all boring, rules-based brute-force engines. And then DeepMind showed that a very different approach is better.



