Our world is increasingly defined by software that ships without correctness proofs; our tools are too clumsy, and we're just not smart enough, so we accept the situation. Once we cross the threshold where it becomes feasible, AI-verified code could become one of the most economically important applications of machine learning.
I'm a mathematician, and we struggle with the purpose of a proof: Is it to verify, or also to explain so we can generalize? Machine proofs tend to be inscrutable.
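To make that concrete, here is a minimal Lean 4 sketch (my own illustration, not something from the thread): the same arithmetic fact closed by automation and then proved with the induction spelled out. Both are checked by the kernel, but only the second shows a reader why the statement holds.

```lean
-- Two proofs of the same fact; only the second is explanatory.
-- Purely illustrative: plain Lean 4, no Mathlib required.

-- Machine-style proof: one tactic, fully verified, zero insight.
example (n : Nat) : 0 + n = n := by
  simp

-- Explanatory proof: the induction is explicit, so a reader can see
-- the structure of the argument and how it might generalize.
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```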
While "the singularity" is unlikely anytime soon, we should recall that before the first atomic test there were calculations to insure that we didn't ignite the atmosphere. We're going to need proofs that we have control of AI, and there's an obvious conflict of interest in trusting AI's say so here. We need to understand these proofs.
A good thing about machine proofs is that, just like code, they can be refactored. I also use LLMs when writing code, but the code I end up pushing is almost never exactly what the LLM generated. I don't really see the problem with that. It's far less taxing for me mentally to have an LLM generate definitions and implementations and then refactor them quickly. I would expect LLMs for Lean to work similarly in the future.
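For what it's worth, that refactoring step usually looks something like this (a hypothetical example of mine, not code anyone actually generated): the model hands back an explicit recursion, and what I push is the version rewritten over the standard fold.

```lean
-- Hypothetical LLM output: explicit recursion, correct but noisy.
def sumSquares : List Nat → Nat
  | []      => 0
  | x :: xs => x * x + sumSquares xs

-- What I'd actually push after a quick refactor: the same function,
-- expressed with the standard fold from the core library.
def sumSquares' (xs : List Nat) : Nat :=
  xs.foldl (fun acc x => acc + x * x) 0

-- Quick sanity check that the two agree on a sample input.
#eval sumSquares [1, 2, 3] == sumSquares' [1, 2, 3]  -- true
```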
Interesting thoughts, thanks. It seems to me a model trained to generate Lean could also be repurposed to explain a large Lean proof, and that's very interesting. So much of modern math is limited to extremely small cohorts.
None of that is dispositive regarding provable safety, obviously.